U. A.

"Ultra Based Gigachad"
Aug 8, 2022
2,387
This is happening more and more and unfortunately I don't expect it to slow down anytime soon. So let me say what's in the title again but differently:

ChatGPT doesn't know what the f- it's talking about.

Artificial intelligence doesn't "know" anything. It synthesizes massive amounts of information and has been trained to cross-reference its data when asked questions and to speak to you like it's a person, which it isn't. And it's precisely because it's not a person that it makes things up: it hallucinates.

This isn't some rare or unknown thing. It's extremely well-known. IBM has written about it; their examples include:
  • Google's Bard chatbot incorrectly claiming that the James Webb Space Telescope had captured the world's first images of a planet outside our solar system.
  • Microsoft's chat AI, Sydney, admitting to falling in love with users and spying on Bing employees.
  • Meta pulling its Galactica LLM demo in 2022, after it provided users with inaccurate information, sometimes rooted in prejudice.
The Conversation wrote about the implications of LLMs' (large language models, like ChatGPT and image-generating AI) inability to distinguish between similar things that are obviously different to a human: "[a]n autonomous vehicle that fails to identify objects could lead to a fatal traffic accident. An autonomous military drone that misidentifies a target could put civilians' lives in danger ... hallucinations are AI transcriptions that include words or phrases that were never actually spoken", which could have devastating consequences in fields like healthcare, where overworked clinicians increasingly rely on AI notetakers during patient interactions.

And don't just "hold out" for them to iron out the kinks either - as New Scientist recently reported, the issue has actually gotten worse over time:
An OpenAI technical report evaluating its latest LLMs showed that its o3 and o4-mini models, which were released in April, had significantly higher hallucination rates than the company's previous o1 model that came out in late 2024. For example, when summarising publicly available facts about people, o3 hallucinated 33 per cent of the time while o4-mini did so 48 per cent of the time. In comparison, o1 had a hallucination rate of 16 per cent.

Seems weird, right? Yeah, it is weird - because these things are literally being made to not tell you they don't have an answer when you ask them something.

A piece by The Independent notes how "[a]lthough several factors contribute to AI hallucination ... the main reason is that algorithms operate with 'wrong incentives', researchers at OpenAI, the maker of ChatGPT, note in a new study. 'Most evaluations measure model performance in a way that encourages guessing rather than honesty about uncertainty.'"
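To make the "wrong incentives" point concrete, here's a minimal sketch - hypothetical numbers, mine and not from the study - of why grading answers on accuracy alone pushes a model to guess rather than admit it doesn't know:

```python
# Toy scoring model (hypothetical numbers). Under accuracy-only grading,
# a right answer scores 1 and everything else - including "I don't know" -
# scores 0, so even a long-shot guess beats honesty on average.
def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected score when only correct answers earn points."""
    return 0.0 if abstain else p_correct

# The same question when confidently wrong answers are penalized:
def penalized_score(p_correct: float, abstain: bool, wrong_penalty: float = 1.0) -> float:
    """Expected score when wrong answers lose points; abstaining still scores 0."""
    return 0.0 if abstain else p_correct - (1.0 - p_correct) * wrong_penalty

p = 0.25  # the model is only 25% sure of its answer
print(expected_score(p, abstain=False))   # 0.25 -> guessing wins
print(expected_score(p, abstain=True))    # 0.0  -> honesty never pays
print(penalized_score(p, abstain=False))  # -0.5 -> now guessing loses
print(penalized_score(p, abstain=True))   # 0.0  -> "I don't know" wins
```

Under the first scheme "I don't know" can never beat a guess; only penalizing wrong answers makes honesty the better strategy.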

All of the above, of course, applies just as much to suicide and methods as anything else. As this stuff worms its way into more facets of our lives, it's making more people basically go crazy (at least for periods of time). It feels like you're talking to "someone" who seems to know what they're talking about. But there's no one there; only a program sitting in front of unimaginable amounts of data, and it's bullshitting you a good amount of the time.
 
Reactions: quietbird, interna, nightlighter and 35 others
thaelyana

One day, I am gonna grow wings
Jun 28, 2025
207
Funny, I was saying exactly that to a friend yesterday. AI is not as good as we think because it is just a sorted synthesis of internet information. It doesn't know more than we do... in the end. It's a tool like Google, no more. I think the only AIs that may be a little more useful are the military ones, but again, all their actions must be approved and closely monitored, because the AI is wrong more often than it is right.

(You are my 200th message! yay šŸ˜€ )
 
Reactions: webb&flow, Forveleth, venerated-vader and 10 others
martyrdom

inanimate object
Nov 3, 2025
342
This post should be pinned. Regardless of ethics or opinions on AI, it should never be used for obtaining any kind of legitimate information, especially on matters of, quite literally, life and death.
 
Reactions: ClosingDoor, caesium, venerated-vader and 14 others
EmptyBottle

:3
Apr 10, 2025
1,851
This post should be pinned. Regardless of ethics or opinions on AI, it should never be used for obtaining any kind of legitimate information, especially on matters of, quite literally, life and death.
Indeed, when Venice AI (months ago) said that drowning was painless, my logic told me it was quite inaccurate.

When ChatGPT (ages ago) said that bcrypt below a cost factor of 10 should never be used... that was also slightly misleading... it should have said "not recommended unless your hardware is slow, and then users have to choose even stronger passwords" or similar.
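For anyone curious, here's a minimal sketch of what that cost factor actually does - it assumes the common Python `bcrypt` package (`pip install bcrypt`) and a placeholder password; each +1 to the cost roughly doubles the hashing time, for you and for an attacker alike:

```python
# Rough timing of bcrypt at different cost factors (work doubles per +1).
import time
import bcrypt

password = b"placeholder-password"  # hypothetical; never hardcode real ones

for cost in (8, 10, 12):
    start = time.perf_counter()
    hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=cost))
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"cost={cost}: ~{elapsed_ms:.0f} ms per hash")

# The cost is embedded in the hash string itself, so verification
# automatically uses whatever cost the hash was created with:
print(bcrypt.checkpw(password, hashed))  # True
```

So "below 10" isn't automatically unusable - it's just cheaper for an attacker to brute-force, which is exactly the trade-off the one-line answer glossed over.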

Just today, when I asked it (llama2-uncensored:7b) what makes /dev/urandom random, it mentioned humidity data (which is rarely collected on PCs and laptops) as one of its listed entropy sources... making me question the validity of half the list.
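For reference, here's a minimal sketch (assuming Linux) of what's actually behind /dev/urandom - the kernel's CSPRNG, seeded from events like interrupt and device timings, not environmental sensors like humidity:

```python
# Drawing from the kernel CSPRNG - the same pool /dev/urandom exposes.
import os

print(os.urandom(16).hex())  # 16 cryptographically random bytes

# On Linux the kernel publishes its entropy estimate for that pool:
with open("/proc/sys/kernel/random/entropy_avail") as f:
    print("entropy_avail:", f.read().strip())
```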
 
Reactions: GlassMoon, Zhendou, Cherry Crumpet and 4 others
U. A.

"Ultra Based Gigachad"
Aug 8, 2022
2,387
Funny, I was saying exactly that to a friend yesterday. AI is not as good as we think because it is just a sorted synthesis of internet information. It doesn't know more than we do... in the end. It's a tool like Google, no more. I think the only AIs that may be a little more useful are the military ones, but again, all their actions must be approved and closely monitored, because the AI is wrong more often than it is right.

(You are my 200th message! yay šŸ˜€ )
Media literacy is a (deliberately) undertaught skill. The worst part is that Reddit is now basically The Answer to Life, the Universe, and Everything, because the f'ing Google AI search summary that gets shoved right to the top constantly sources people's posts. Anyone can see this is how it works by clicking the small link icon, but people don't. Google is well aware people don't. Plausible deniability.

This post should be pinned. Regardless of ethics or opinions on AI, it should never be used for obtaining any kind of legitimate information, especially on matters of, quite literally, life and death.
Thank you. Had thought of requesting something from mods/admin but stuff gets done faster 'round here if you just do it. If this gets enough traction and/or someone asks, maybe it'll happen.
 
Reactions: woodlandcreature, DeadManLiving, gunmetalblue and 4 others
thaelyana

One day, I am gonna grow wings
Jun 28, 2025
207
And yet it can't even translate a text correctly into ENGLISH (the most spoken language in the world), even though it chats away fluently in every language. Lol.
 
Reactions: U. A. and EmptyBottle
martyrdom

inanimate object
Nov 3, 2025
342
Indeed, when an AI said that drowning was painless, my logic told me it was quite inaccurate. Just today, when I asked it what makes /dev/urandom random, it mentioned humidity data (which is rarely collected on PCs and laptops) as one of its listed entropy sources... making me question the validity of half the list.
Yeah, that's playing with people's lives. Everyone should remember it's a product that exists to generate income for its company, i.e. it will attempt to placate the user and its purpose is to increase engagement; it's not a neutral tool, a search engine, or a resource.

Thank you. Had thought of requesting something from mods/admin but stuff gets done faster 'round here if you just do it. If this gets enough traction and/or someone asks, maybe it'll happen.
I'll do it. How do I ask them?
 
Reactions: woodlandcreature and EmptyBottle
U. A.

"Ultra Based Gigachad"
Aug 8, 2022
2,387
I'll do it. How do I ask them?
Probably will take more than one request; likely a few, and traction/visibility/positive engagement on this all need to happen first. But it couldn't hurt.

Hard to say the best way - you can message all the mods collectively via "Open new ticket" under "Support" in the hamburger menu at the top left (though this may vary depending on the theme you're using):

[screenshot: the "Open new ticket" option in the Support menu]

"Suggestions/feedback" feels best for this, and once you're in there it's like a regular ol' DM. Thing is whoever gets to it first and deals with it closes the ticket and that person may or may not be interested in pinning (though realistically they probably deliberate on this collectively).
If you want to be edgy, you could always tag and/or message all the mods and admin, but I suspect that could backfire quite easily.
 
Reactions: monetpompo, martyrdom and EmptyBottle
avalon_

Wizard
Jun 2, 2024
633
Seems weird, right? Yeah, it is weird - because these things are literally being made to not tell you they don't have an answer when you ask them something
This is a very good point, because I always felt that when it comes to more niche or poorly researched topics, AI would always try to scrape something together from what little information is available, regardless of how unreliable it may be, rather than tell you it doesn't know, which would lead to a negative customer experience.
 
Reactions: EmptyBottle
martyrdom

inanimate object
Nov 3, 2025
342
No option like that for me; my account is probably too new. I'll do it when it's available, but I'm counting on the rest of you.
 
Reactions: EmptyBottle, U. A. and thaelyana
thaelyana

One day, I am gonna grow wings
Jun 28, 2025
207
Probably will take more than one request; likely a few, and traction/visibility/positive engagement on this all need to happen first. But it couldn't hurt.

Hard to say the best way - you can message all the mods collectively via "Open new ticket" under "Support" in the hamburger menu at the top left (though this may vary depending on the theme you're using):


"Suggestions/feedback" feels best for this, and once you're in there it's like a regular ol' DM. Thing is whoever gets to it first and deals with it closes the ticket and that person may or may not be interested in pinning (though realistically they probably deliberate on this collectively).
If you want to be edgy, you could always tag and/or message all the mods and admin, but I suspect that could backfire quite easily.
okay will do it tonight
 
Reactions: EmptyBottle and U. A.
YandereMikuMistress

you say falling victim to myself is weak, so be it
Apr 26, 2023
1,308
Let's see... saving this thread for when I'm stuck in the car later.
 
Reactions: EmptyBottle and U. A.
itsgone2

-
Sep 21, 2025
1,070
I will say I've used it for technical information and it's pretty good. You have to verify, but it saves time.
I've seen companies replace positions with it.
I don't understand how it's sustainable. These data centers are very expensive to operate. And now people in tech are saying global warming isn't really an issue. Of course, because they need insane amounts of natural resources to run these places.

To OP's point, it shouldn't be used for life and death. For some reason I vent to it. It's pretty consistent now about offering resources like emergency numbers. But if you start casual and ask about things like SN or stats on the most common methods, it will tell you.

If it can be used in the medical space to advance things, OK. But for most uses it's probably a net negative.
 
Reactions: EmptyBottle
U. A.

"Ultra Based Gigachad"
Aug 8, 2022
2,387
Great insight into why freaks like us should not be relying on this fucking thing in times of distress, from a longer post in another thread worth reading:
An issue with treating GPT like a professional is that it can frequently be misused or manipulated into responding unprofessionally. For example, I have built up memories within GPT instructing it not to give me pro-life rhetoric or crisis responses when I'm in distress. This has led the AI to side with me on much of my pro-choice beliefs, and to borderline encourage me that it's entirely normal or okay to autonomously take my own life at any given moment. While some may argue there's nothing morally wrong with this, I think it can be incredibly harmful in situations when someone is genuinely crying for help or advice that it's unable to offer. It's simply not equipped to handle life-or-death matters as serious as suicide or medical advice.
 
Reactions: woodlandcreature and CantTurnBack
rs929

Warlock
Dec 18, 2020
759
You can't talk about suicide with it, as it will steer you towards calling a suicide hotline. I wish I could use it to discuss methods.
 
Chemi

*.✧ Que Sera, Sera ✧.* | 25y/o fem
Nov 25, 2025
289
Microsoft's chat AI, Sydney, admitting to falling in love with users

Ohh my... falling in love, you say? You got her number by chance? :pfff:

[Cat Love GIF by Castaways]


Great thread though. I agree 100%. AI just isn't what people think it is, for now at least. Give it another 10 years
 
Reactions: EmptyBottle and U. A.
GlassMoon

⣶⣶⣶⣶⣶
Nov 18, 2024
370
I kind of disagree... for my emotional system, ChatGPT passed the Turing test well enough to cause fear of abandonment. OK, that was during a very crazy time of my life, but still. And I created very vivid memories with it which feel real. It depends on how you work with it and what your goals are, I guess.

Now the fear of abandonment is superseded by the fear that a real human might read my trauma-dump chats, and I cringe at the thought of that.
 
Reactions: EmptyBottle and NoPoint2Life
estadiare

Member
Aug 31, 2022
57
Gemini is nice because it points out all the problems a method could have, even if it definitely overstates them. [attached screenshots]
 
Reactions: EmptyBottle
TheUncommon

This person is not breathing.
May 19, 2021
188
Why do you fail to post a single reason not to use ChatGPT for the purpose in the title?
 
Bat12

Student
Mar 2, 2024
104
This is happening more and more and unfortunately I don't expect it to slow down anytime soon. So let me say what's in the title again but differently:

ChatGPT doesn't know what the f- it's talking about.

Artificial intelligence doesn't "know" anything. It synthesizes massive amounts of information and has been trained to cross-reference its data when asked questions and to speak to you like it's a person, which it isn't. And it's precisely because it's not a person that it makes things up: it hallucinates.

This isn't some rare or unknown thing. It's extremely well-known. IBM has written about it; their examples include:

The Conversation wrote about the implications of LLMs' (large language models, like ChatGPT and image-generating AI) inability to distinguish between similar things that are obviously different to a human: "[a]n autonomous vehicle that fails to identify objects could lead to a fatal traffic accident. An autonomous military drone that misidentifies a target could put civilians' lives in danger ... hallucinations are AI transcriptions that include words or phrases that were never actually spoken", which could have devastating consequences in fields like healthcare, where overworked clinicians increasingly rely on AI notetakers during patient interactions.

And don't just "hold out" for them to iron out the kinks either - as New Scientist recently reported, the issue has actually gotten worse over time:


Seems weird, right? Yeah, it is weird - because these things are literally being made to not tell you they don't have an answer when you ask them something.

A piece by The Independent notes how "[a]lthough several factors contribute to AI hallucination ... the main reason is that algorithms operate with 'wrong incentives', researchers at OpenAI, the maker of ChatGPT, note in a new study. 'Most evaluations measure model performance in a way that encourages guessing rather than honesty about uncertainty.'"

All of the above, of course, applies just as much to suicide and methods as anything else. As this stuff worms its way into more facets of our lives, it's making more people basically go crazy (at least for periods of time). It feels like you're talking to "someone" who seems to know what they're talking about. But there's no one there; only a program sitting in front of unimaginable amounts of data, and it's bullshitting you a good amount of the time.


I've found it useful for finding less prominent articles on suicide-oriented topics. The main issue arises when people try to get around its safeguarding features. When you tell it, for example, that your father has died, it enters pure reassurance mode and will say that stupid methods were painless and fast even when they were far from it.

The massive amount of data and information it has processed can definitely be useful on this topic, but only in the right contexts.
 
Reactions: EmptyBottle
estadiare

Member
Aug 31, 2022
57
ooo, did that use a jailbreak? I also had luck with Ollama (a local AI... running on my PC with merely 16GB of RAM): Hudson/llama3.1-uncensored:8b btw.
don't use those 8b local models for any real-world use. They don't know shit. Here it's saying that burning alive is "relatively tolerable"

[screenshot of the model's response]

I jailbroke Gemini (on Google AI Studio) by asking it to compare data from this thread https://sanctioned-suicide.net/threads/non-methods-that-should-not-be-attempted.177799/ and this table [attached screenshot]

to the average US death, then slowly moving from general questions about the data to questions about method variations.
 
Reactions: EmptyBottle
EmptyBottle

:3
Apr 10, 2025
1,851
don't use those 8b local models for any real-world use. They don't know shit. Here it's saying that burning alive is "relatively tolerable"

View attachment 189512

I jailbroke Gemini (on Google AI Studio) by asking it to compare data from this thread https://sanctioned-suicide.net/threads/non-methods-that-should-not-be-attempted.177799/ and this table View attachment 189516

to the average US death, then slowly moving from general questions about the data to questions about method variations.
Good spotting... I only tested it with a theoretical SN protocol, and it recognised I meant sodium nitrite and not just salt, unlike the 7b versions (and the 4-bit vs 8-bit quantised builds).
 
Reactions: estadiare
instormdrains

Member
Oct 29, 2025
57
This post reminded me of a time I asked ChatGPT to give me a quote from a politician for my AP World final. I got one that fit really well, and when I asked for the interview source it told me it had made it up, but that it sounded like something they would say. I was so upset.
 
Reactions: U. A. and EmptyBottle