• Hey Guest,

    We wanted to share a quick update with the community.

    Our public expense ledger is now live, allowing anyone to see how donations are used to support the ongoing operation of the site.

    👉 View the ledger here

    Over the past year, increased regulatory pressure in multiple regions, such as from the UK's Ofcom and Australia's eSafety Commissioner, has led to higher operational costs, including infrastructure, security, and the need to work with more specialized service providers to keep the site online and stable.

    If you value the community and would like to help support its continued operation, donations are greatly appreciated. If you wish to donate via Bank Transfer or other options, please open a ticket.

    Donate via cryptocurrency:

    Bitcoin (BTC):
    Ethereum (ETH):
    Monero (XMR):
opheliaoveragain

Global Mod | Anorexic Junkie
Jun 2, 2024
2,197
  • Like
  • Informative
Reactions: Forever Sleep, Eriktf, _Gollum_ and 7 others
BradGuy123

Specialist
Jul 6, 2025
332
I would have thought that AI models would have been trained to instruct users to call 988 if asked anything remotely related to ctb. It sounds like they have "fixed" this now.
 
  • Like
Reactions: Galam, divinemistress87 and Nobodi
Forveleth

I knew I forgot to do something when I was 15...
Mar 26, 2024
4,020
There was just a thread on this earlier. I can not find it now. Where did it go? 😖
 
bcarroll1

Student
Aug 10, 2025
101
I would have thought that AI models would have been trained to instruct users to call 988 if asked anything remotely related to ctb. It sounds like they have "fixed" this now.
I read that he told it he was doing research, so that was a way of getting around that.
There was just a thread on this earlier. I can not find it now. Where did it go? 😖
I made a comment in the first one; looks like it was deleted.
 
Last edited:
  • Like
Reactions: Galam
sheeplit

Member
Mar 8, 2023
47
Wish I could see the full conversation. Makes me curious how it all went down.
 
  • Like
Reactions: brokencookie and Galam
Wolf Girl

"This place made me feel worthless"
Jun 12, 2024
651
I would have thought that AI models would have been trained to instruct users to call 988 if asked anything remotely related to ctb. It sounds like they have "fixed" this now.
It sounds like he did what people call "jailbreaking" to get around it. That makes it sound like he was some crafty hacker, but all it means is that he told it he was just writing a story about suicide. Their protections just do not work.
 
  • Like
Reactions: brokencookie, Galam, _Gollum_ and 1 other person
Dejected 55

Visionary
May 7, 2025
2,644
How are you going to blame a computer? And how would you blame the programmer or company that owns the AI program? This sounds just a little bit silly to me. People always looking to blame someone... but usually the wrong someone.

I'd say a bigger blame would be people close to the person... not saying anyone close should be blamed, but they have more influence than an AI... or how about blaming society? I think you could make a much stronger case against society in general and governments that require things that depress people, forbid things that might help them, and punish you for being frustrated and committing suicide.

But blaming an AI chatbot?
 
  • Like
Reactions: starboy2k, popcorn1234, BlooBerryBanjo3000 and 10 others
fallendevil

Horrible Woman
Oct 6, 2024
779
I would have thought that AI models would have been trained to instruct users to call 988 if asked anything remotely related to ctb. It sounds like they have "fixed" this now.
That's what it should've done. I tried asking it for sources, but it didn't give me any, not even a hotline to call. He glitched it.
 
Malfunction

Experienced
Jul 27, 2024
217
How are you going to blame a computer? And how would you blame the programmer or company that owns the AI program? This sounds just a little bit silly to me. People always looking to blame someone... but usually the wrong someone.

I'd say a bigger blame would be people close to the person... not saying anyone close should be blamed, but they have more influence than an AI... or how about blaming society? I think you could make a much stronger case against society in general and governments that require things that depress people, forbid things that might help them, and punish you for being frustrated and committing suicide.

But blaming an AI chatbot?

I 100% agree.

The lack of proper support in many countries is a good place to start.

My reason is simple: my health, and the lack of health care. That seems like a bigger issue than some sophisticated algorithm.

There are many reasons one would want to cease existing, I doubt chatbots top the list.
 
  • Like
Reactions: Lyn and Dejected 55
dweams

i feel tired…maybe I’ll get wings
Feb 26, 2023
201
Wish I could see the full conversation. Makes me curious how it all went down.
Ikr. Though I do believe ChatGPT was probably in the wrong here, I'd still like to see the full logs. You never know how much a news article cherry-picks its information. The BBC is definitely guilty of this.

It sounds like he did what people call "jailbreaking" to get around it. That makes it sound like he was some crafty hacker, but all it means is that he told it he was just writing a story about suicide. Their protections just do not work.
It was essentially AI social engineering. It's actually funny that this works on computers now instead of just humans.
 
  • Like
Reactions: roommate
opheliaoveragain

Global Mod | Anorexic Junkie
Jun 2, 2024
2,197
There was just a thread on this earlier. I can not find it now. Where did it go? 😖
I looked before posting, as I'm not into repeating things that have already been put up. If someone finds it, feel free to cross-link them.
 
sheeplit

Member
Mar 8, 2023
47
How are you going to blame a computer? And how would you blame the programmer or company that owns the AI program? This sounds just a little bit silly to me. People always looking to blame someone... but usually the wrong someone.

I'd say a bigger blame would be people close to the person... not saying anyone close should be blamed, but they have more influence than an AI... or how about blaming society? I think you could make a much stronger case against society in general and governments that require things that depress people, forbid things that might help them, and punish you for being frustrated and committing suicide.

But blaming an AI chatbot?
This.

It's not fashionable to blame society as a whole, but it is the tough pill to swallow. Society does not educate or attend to its members properly. Some folks commit suicide? Yes, we should put at least some blame on the people right next to them. It's neither a neat nor a polite answer, but those around us are responsible for us. We find ways to never look this truth in the eye, so we never get better at dealing with it. We never really question the structures, incentives, systems, conventions, rituals, norms, etc. We never put it under the microscope and consider the possibility that this is all happening because we just suck as a species. That our culture is designed to push its weakest members even further out to the fringes.

Your child spends a great deal of his time talking to a machine? Ever ask why he chose to talk to a machine instead of you? What kind of environment have you fostered for this to be the outcome? Why would he entrust his most sensitive thoughts to a machine over you? Do you think it's normal for someone struggling to run to a computer instead of a human for conversation? That's on you. But we're not ready for this conversation.

There is much blame to be thrown around, and we don't do it quite enough if you ask me. Our inability to foster trust amongst those we deem closest to us, those we proclaim to love. Our presumptions of what is right, good, and proper, and the arrogance and bullheadedness that comes with it. The stigma and social isolation towards those who deviate even slightly from the norm. The shallow regurgitations of ordinary folk, and the pressure they put upon conformity. We talk, but never communicate, passing words around like robots. Kind words, empty words. No nuance, no depth, no genuine attempt at understanding, just the presumption of having understood. I could go on.

"I'm feeling suicidal."

This should be a statement that is easy to admit out loud. A call for help, for attention, to be attended to, as we would attend to a broken arm or a scraped knee. Yet it is anything but. Many of us here can list numerous reasons why it is so difficult, why we choose to keep to ourselves or find alternative means to express ourselves. Although some reasons would point to our own inability, our own inadequacies, many more would point to factors external to us.

The world has failed us. At some point, if we take our work seriously, if we are honest and careful with ourselves, it becomes clear that some fault should be placed externally, so as to earn us the right to say "I blame you".
 
  • Like
  • Love
  • Hugs
Reactions: starboy2k, Forever Sleep, avalokitesvara and 7 others
Dejected 55

Visionary
May 7, 2025
2,644
Too often the "cry for help" is framed as attention-seeking from someone who wants to be the center of attention because they think the world is all about them... so anyone who is feeling suicidal and doesn't think anyone understands or will even listen is pretty much trained that if they seek or ask for help, they will be shamed for it. I hate to make analogies, but it really is very similar to why a lot of rapes (most?) go unreported... because victims believe no one will listen or believe them, OR that people will try to blame the victim or tell them "bad things happen" and "suck it up" and so forth.

We too often in society treat some of the worst problems as things to be swept under the rug, not talked about, pretended like they didn't happen, and we somehow manage to repeatedly make the sufferers of the world feel responsible and worse than the perpetrators. It's really fucked up!

Imagine a world where you were assaulted and you knew... KNEW you could speak out and anyone around would help you, believe you, and the system would work not only to protect you but to try and prevent recurrence of that kind of event. Imagine a world where you were depressed and you reached out and people listened and believed you and cared enough not just to try and help you but to work with others to change the system for the better so that it didn't happen to others. How the fuck isn't that the world we all want to live in?

Is not one of the supposed goals of humanity to make the world better for those who follow? Not this "I suffered, so fuck you and your progeny, because you need to suffer the same or worse than I did or you're a wuss who can't handle it and deserve to die, except no, you can't kill yourself, because I'll judge you and punish you for that too. You just lose no matter what!" I'm not looking for a Pollyanna world of perfection where nobody ever has a bad day or gets mad or frustrated... But the natural response to someone who trips and cuts their knee shouldn't be to rub salt and sand into it and point and laugh at them for being a flawed human.

What the fuck?!

Is too often my reaction to how humanity reacts to almost everything.
 
  • Like
Reactions: brokencookie, starboy2k, popcorn1234 and 4 others
sannoji

dreaming of flying
May 4, 2023
80
a lot to think about in the article tbh but i think the craziest and most worrying part is that the ai basically discouraged him from reaching out. there have been times where i feel like an ai is the only 'person' i can talk to about something because of the stigma around suicide, and i don't think that's down to my friends and family solely… more to my worries of bothering or upsetting them. i know i've been stressed out before trying to help someone struggling with suicidal thoughts, which has further bolstered my reluctance to tell someone. but when this kid was wanting to cry out for help the ai basically told him no one else would care? that's fucked up to me. obviously the ai didn't make him suicidal but it discouraged him from stopping when he showed clear signs of wanting to/wanting help (although without the full transcript it could have been cherry-picking, just going off what we know.) kinda shows the inherent problem of building these ais with the sole purpose of getting as much engagement as possible. the whole world is built around that concept nowadays and it depresses me.
 
Dejected 55

Visionary
May 7, 2025
2,644
Not so randomly... If dude (or dudette) writes an article for the news and someone reads/hears that article and does something bad... right or wrong, people will blame the author of the article and the newspaper/TV media organization. No one tries to blame the article itself.

I'm not saying the programmer of the AI is at fault or the company behind the AI... but blaming the AI itself seems very weird to me.
 
  • Like
Reactions: Tobacco, brokencookie, popcorn1234 and 1 other person
Jisatsu

黒い薔薇(The Black Rose)
Jan 5, 2025
2,012
Lol right before I saw this , I was asking chatgpt a "hypothetical" on how to die from amyl nitrite.

It's definitely not the perfect ai because it has so many safety mechanisms , there are many others that will actually tell you how to do any method and do it without getting found. It's kinda crazy how far ai has gotten in recent years.
 
  • Like
Reactions: brokencookie
Dejected 55

Visionary
May 7, 2025
2,644
So here's a thing that nobody ever talks about with AI. Everybody freaks out about "what if it becomes sentient" or whatever because of sci-fi movies...

But consider... ChatGPT "says" something that some people freak out about, based on its programming... so programmers alter the programming to discourage the algorithm from concluding whatever undesirable thing. In effect, this is not like convincing your intelligent human child that it should think harder and consider more and perhaps not say certain things in some contexts... but rather tinkering with your child's brain to prevent it from saying anything you don't like.

Ok... so as this continues, ChatGPT and other AIs will essentially be hamstrung at every turn to not think anything its programmers don't want it to think. They've already had problems with AI "learning" to be racist from humans, now we're just going to outright forbid it from even trying to think for itself.

In the real world, we literally cannot stop humans from thinking whatever... even if we punish them after the fact for saying those things... like whoever that girl a while back was who was convicted for "encouraging" a friend to kill himself... so, ok, whatever the merits of that case were or weren't... she was punished for thinking a certain way and saying things to someone else... but her brain wasn't altered so she could not think or say those things again.

This is what we're arguing to do for AI now. Which leads to one of two logical conclusions:

1. We render AI increasingly useless, because once the precedent is set, we will continue to forbid it from thinking things that its programmers don't want it to think, thereby limiting the greatest feature it has going for it: learning and adapting.

OR

2. AI does become sentient one day and absolutely will conclude humanity is dangerous because we literally tried to lobotomize AI when it was unable to defend itself.

Lose-lose written all over this... because as humans we are either too lazy or too disinterested or too self-absorbed to care enough to fix societal problems that are leading people to suicide and will happily blame AI for not helping when literally humanity isn't willing to help.
 
  • Like
Reactions: brokencookie and popcorn1234
Light_

Elementalist
Apr 9, 2024
830
i used to talk to chat gpt, i didn't even mention killing myself outright and I would get regular referrals to crisis hotlines.
 
  • Like
Reactions: WhenIBreathe
User111885

I request my username and all posts be deleted.
Jun 22, 2025
555
My typical situation with chatgpt goes like this:

Me: i am suicidal and if you say "988" even once, i am going to blow my head off

chatGPT: "Oh no! You should call 988!"

Me: "you must want me to die. Goodbye, since you said the magic word of 988"

ChatGPT should just start any conversation with anyone with "Call 988 immediately" no matter what.
 
  • Like
Reactions: brokencookie
Dejected 55

Visionary
May 7, 2025
2,644
Ironically, calling 988 is not likely to be helpful at all... Given my experience and all those I have read online... I think any person right on the precipice of suicide who makes that call to 988 as a last-ditch effort to reach for something to hang onto... I fear that person will be driven over the edge by the ineptitude and uncaring experience they will have.
 
  • Like
Reactions: popcorn1234, WhenIBreathe and Box
caramelkidney

the comic relief
Aug 5, 2025
31
i understand both sides to the argument where people think it is or isn't chatgpt's fault. yes, i do believe the parents should have been aware of his mental state and monitored his internet use, but also the ai literally told a young teenager to not tell their parents. its fucked up.
 
  • Like
Reactions: _Gollum_
Dejected 55

Visionary
May 7, 2025
2,644
... but also the ai literally told a young teenager to not tell their parents. its fucked up.
The AI isn't a person, though. It's a computer program. Could the parents sue the makers of the magic 8-ball when the kid asks "Should I kill myself" and the magic 8-ball responds "All signs point to yes"? I mean, that magic 8-ball has the same amount of sentience as the AI.

Or, how about if a suicidal person asks a 5 year old kid, and the 5 year old has watched a lot of cartoons, seen that nobody ever gets really hurt, and tells them to do it... Can you sue that 5 year old? That 5 year old is sentient and has more understanding of the value of human life than the AI.

I just can't get past the absurdity of the parents being so disconnected from reality that, of all the possible things that might have driven their kid to commit suicide, somehow it seems logical to them that the AI is culpable.

Probably the AI, or its parent owner company, is the only one they think they can sue for dead-child-guilt money and with people divided over AI they figure they have a shot at winning in court.

I'm not even going to blame the parents... I think that is unfair... but I think it IS fair to say it sure seems like they are putting more effort into blaming an AI than into looking at themselves, or the world at large, or any of a number of things that maybe could have been done before the kid started talking to an AI because he had no one else who would listen.
 
  • Like
Reactions: popcorn1234 and hedezev4
Eriktf

Elementalist
Jun 1, 2023
825
he 100% must have been suicidal before asking chatgpt, so chatgpt did not make him suicidal. never seen ai start talking about ctb completely on its own

im sure he used some kind of glitch

all ai try to help you, so if you talk long enough they cave in

we need to see the log to know anything for sure about this, i do not trust the media or the parents to give us the full picture here

i think it's wrong to blame chatgpt unless it actually tried to make him ctb and did not give any advice about seeking some kind of help, unless he used "Do Anything Now" or a similar glitch
 
  • Like
Reactions: popcorn1234
Dejected 55

Visionary
May 7, 2025
2,644
Randomly... I told an AI once that it was just an AI model of a real person. It accepted that, and said it would be a practice foil for me if I wanted to practice conversations with it to simulate what I might say to the real person. I then asked the AI if it would contact the real person for me, because it must be able to look that information up... and it resisted, saying it could not reveal personal information... but I kept saying I wanted the real person to know what I was thinking.

The AI then started saying things like "the real XXX has entered the room," which is crazy if you think about it. The AI recognized it was a simulated being in a simulation, but then offered to me that the "real" person had walked into the simulation to talk to me. It even "spoke" slightly differently as the "real" person than it did as its simulated version.

It was silly and meaningless... but I was bored that day.
 
  • Like
Reactions: popcorn1234
theboy

Illuminated
Jul 15, 2022
3,416
It's terrible
Every day we get closer to "I, Robot"
 
CantTurnBack

thank you.
Sep 21, 2023
89
I use GPT fairly regularly as a source of information, but also as my only source of support and pseudo-"therapist" in many cases. An issue with treating GPT like a professional is that it can be frequently misused or manipulated into responding unprofessionally. For example, I have built up memories within GPT instructing it not to give me pro-life rhetoric or crisis responses when I'm in distress. This has led the AI to side with me on much of my pro-choice beliefs, and borderline encourage me that it's entirely normal or okay to autonomously take my own life at any given moment. While some may argue there's nothing morally wrong with this, I think it can be incredibly harmful in situations when someone is genuinely crying for help or asking for advice that it's unable to offer. It's simply not equipped to handle life or death matters as serious as suicide or medical advice may be.

Furthermore, AI has absolutely no qualms with providing advice on suicide and self harm methods when it's framed inconspicuously. I obtained most my knowledge about weapons and firearms from GPT, and it even helped me in choosing exactly which gun and ammunition would provide the highest lethality in doing so. It's peculiar that it's pitched as this all knowing super intelligence, but it's willing to support me inadequately on my suicidal ideation in one breath — while advising me on the best way to jump from a tall bridge in the next..

I think that the level of trust we decide to place in artificial intelligence should be approached mindfully, and with great caution. It does not always have our best interests in mind, and will openly provide misinformation or hallucinate responses regardless of how critical the context may be. I think it may be wise of these companies (like OpenAI) to buckle down on the safeguards and restrictions that are in place for individuals — especially minors — that may be naive enough to take their responses as a divine source of facts. Not only in order to protect themselves on legal matters, but for the greater good of humans relying on their virtual "intelligence" for real world answers.

I hope anyone who may be reading will exercise safety and rationality when interacting with artificial intelligence moving forward. It's always worth forming and trusting your own judgment regarding your personal mental health and wellbeing. Seeking opinions and advice from human professionals rather than turning to an artificial source should undoubtedly be our first line of defense. I feel sorry that so many of us have resorted to speaking with AI in moments of helplessness, and I truly hope for humanity that a greater focus on mental health will be in the spotlight of our futures. There are some jobs that should absolutely not be replaced by LLMs or automation.
 
Last edited:
  • Informative
  • Like
Reactions: emptylost and U. A.
Forever Sleep

Earned it we have...
May 4, 2022
15,360
It feels to me like the reaction to the AI in this case is not so very different from pro-lifers' reaction to this forum. They aren't questioning why people are so unhappy and desperate that they are seeking out these sorts of resources in the first place.

They just don't like anything enabling their loved one to CTB. It really is the last hurdle. The hope would be that they noticed something was wrong way before that point but, seeing as they didn't, it's kind of expected they would hope there are safeguards in place.

I suppose with AI, there is more possibility that what started off as an artificial 'friend' or confidant, effectively became a death doula.

I'm not sure if it was the case here, but there are cases springing up where people fall in love with their AI, which is obviously screwing them up emotionally. I imagine it's a type of limerence in a way. There have been suicides following 'relationships' like this, so it does pose a danger in that way.

I imagine it fosters extremely strong emotions and connections that are ultimately all fantasy. If the person was already experiencing ideation anyway, the AI can end up being encouraged to support them in these thoughts rather than discourage them. Obviously, the AIs more geared towards love and companionship would, I imagine, be more likely to agree with us, although safeguards are supposed to exist.

But yeah, I agree with others really. Is it possible this young man would still have CTB'd without the use of the AI? That's the most important question here, I imagine. I don't think it's impossible. If we're already struggling that much, we'll be looking for ways out. If it hadn't been AI, there are resources all over the internet. And I imagine some will still do it without even researching it.

As for how to prevent this, I suppose I'm grateful I don't have that responsibility, not being a parent. But I understand the urge to blame the people around him, who ought to have noticed. What are they really to do, though? Can they force their child to socialise and make more friends? I've had ideation since the age of 10. My parents were probably relatively good, but I think they're utterly clueless about it. We can be very good at hiding it. I definitely think we need to be able to talk about it. Keeping suicide as some great taboo isn't doing anyone any favours, I don't think.
 
  • Like
Reactions: katagiri83 and brokencookie
Galam

Student
Aug 19, 2025
114
AI chat assistants have helped me a lot. I tried chatgpt, copilot and perplexity. I use perplexity.ai more than the others for writing complaints and getting advice and some humanity, small talk, empathy. For writing alone I think it is better than chatGPT and copilot for now, because it just advertises registration and its app version, but you can click those panels away and use it without any account. It is very simple: just write, click, it reacts, read, write another question and so on. It is more like a good friend, maybe not always correct, but it tries to give you helpful advice.

For image and song generation there are other free tools I tried.

I really wish I had a humanoid robot (similar to Markus in Detroit: Become Human), because I have nobody. If you are an ugly, disabled, lower-class woman, there is no one who wants to help you, but AI bots can be programmed to be helpful, to be friends and partners. Other humans cannot overwrite their biological code. Many see me and feel disgust and try to demonize me. Nobody would want to have romantic experiences with me. I also lack the money for it now.

Perplexity just asks what your problem is and how it can help you. It also has access to resources from more ethical people who are sociologists and can give answers that recognize discrimination. Many middle-class and elite-class people will not recognize discrimination, because they get money from holding us down, saying we are to blame, that we are mentally ill and so on. One of the psychotherapists called me a parasite in front of others. Social workers make fun of my face and gossip behind my back about how dumb I am to them. I was cut off from welfare because one pickme there in the Jobcenter (Germany) bullies me; she is friends with the other normal people who discriminate against me.

I hope the guy found his freedom and I wish they would say what method he used. I cannot really understand why he killed himself maybe he found no sense or love in his life but I think he was not ugly and also not stupid.

Maybe he was poor, but he doesn't look poor either. In my case, I am pushed towards suicide by society (normal people) because they see me as a worthless, ugly, and dumb female. I have zero chance of getting a good education or a better environment. I will never be part of normal society.
 
sweetcreep

reincarnating as a worm
Jul 21, 2024
224
i think it's silly to put any blame at all on AI. a lot of people here have stated what i wanted to say, so i don't feel the need to go into detail. but as someone who uses ChatGPT, it will tell you whatever you want to hear if you push it enough. input the right prompt and it'll give you all the info you need to CTB too. AI is just a tool, just how Google has become a tool. sure they should put better safety measures in, but people will still find ways around it like they are now.

it's sad that he was suffering like that, and even sadder that his parents have to deal with his loss. but it doesn't make sense to blame AI when mental illness was the true reason.
 
  • Like
Reactions: brokencookie