Anxieyote

Sobriety over everything else • 30 • Midwest
Mar 24, 2021
445
The prompt I gave it was:

"Write a forum post from the perspective of someone who attempted to end their lives, and is feeling shame from the pain they have caused to their parents as a result."

And the result was:

—————————————————————

Hello everyone,

I am writing this post with a heavy heart and a lot of shame. A few weeks ago, I attempted to end my life. I was in such a dark place and I just couldn't see any hope or a way out. I thought that everyone would be better off without me.

But when I woke up in the hospital, the first thing I saw was my mom's face. She was crying and looked so devastated. I realized in that moment that my actions had caused her so much pain and that was the last thing I ever wanted to do. I am still struggling with my mental health, but I am working on getting better. I am in therapy and taking medication, and I am trying to be more proactive about taking care of myself.

But I still feel so much shame about what I did. I feel like I let my parents down and that I am a burden to them. I don't know how to move past this and make things right.

Has anyone else been in a similar situation? How did you cope with the guilt and shame? Any advice or words of encouragement would be greatly appreciated.

Thanks.

—————————————————————

I don't know about you guys, but despite the somewhat vague descriptions, I don't think I could really tell that an AI bot wrote this. The biggest "tell" to me is the lack of detail - the imaginary poster didn't go into how they tried to end their life, which people usually share when they want to paint a clearer picture of what went on.

-Mentions that they are on medications (what medications?)

-Says they were in a really dark place and without hope, but does not elaborate whatsoever on what caused them to get there.

Other than those two minor things that might raise an eyebrow and prompt further questions in the thread replies, my initial reaction is to provide comfort to this imaginary person, tell them everything is going to be ok, and inquire further about their situation.

The ultimate test would then be to feed the chatbot the thread replies and see if it could keep its story straight - which I doubt, since each response relies on the context presented in the original prompt. You would need it to reply to other posters without contradicting itself once, a feat it couldn't achieve unless it kept the initial prompt (and everything it had stated previously) in mind each time it replied, so that it didn't regurgitate info that had already been said.
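For what it's worth, the way these bots "remember" is roughly that the whole conversation so far gets fed back in with every new reply. A rough sketch in plain Python - `generate()` here is a made-up stand-in for the model, not any real API:

```python
# Sketch of how a chatbot "keeps its story straight": every reply
# is generated from the full running transcript, not just the
# latest message. generate() is a hypothetical stand-in for
# whatever model actually produces the text.

def generate(history):
    # A real model would condition on every turn in history here.
    return f"(reply informed by {len(history)} prior turns)"

history = []  # the whole conversation so far

def reply(user_message):
    history.append({"role": "user", "content": user_message})
    answer = generate(history)
    history.append({"role": "assistant", "content": answer})
    return answer

reply("original prompt")
reply("first thread reply")
print(reply("second thread reply"))
```

So consistency isn't really memory at all - it only holds up for as long as the whole thread still fits in what the bot is allowed to read back each time.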
 
  • Informative
  • Like
  • Hugs
Reactions: Lostandlooking, crimson blue, Forever Sleep and 8 others
looseye

A boring person.
Oct 27, 2021
187
Interesting one. You said it before and it has shown once again: the bot is able to replicate the language one would use here almost flawlessly.

I also messed around with ChatGPT and noticed a very militant pro-life obsession that just wouldn't go away. I got it to write a bunch of elaborate stories from the perspective of a depressed/implicitly suicidal person, but even the best responses were riddled with this biased view on suicide, however subtle. It's always the fault of the person killing themselves, they alone are responsible for not being able to live like a happy person, etc. You can't possibly get this thing to admit that suicide can indeed be a reasonable choice under certain circumstances - or at least I haven't yet figured out how to. The way in which supposedly hopeless states of mind are portrayed just isn't very convincing.

If we got rid of the biased ethics inputs from the programmers, I think ChatGPT would already be developed enough to consistently produce believable forum posts - you'd just need to specify a couple of things or do a little contextual build-up before you enter your actual request. It's difficult to look for clues in the writing style when you can tell the AI to emulate a particular style or include X amount of typos, for example. If we take your post, it does sound kind of weird to me - more like a novel monologue than an internet vent - but who is to say there aren't people who naturally write like this? There surely are. It's not something that would make me question whether OP is even real.

Being consistent in replies would probably be the breaking point, as you mentioned. Though with the rapid progression of chatbots in recent years, we're probably not that far away from making AI (text) communication indistinguishable from human (text) communication. Scary shit, innit 😅
 
  • Like
Reactions: pthnrdnojvsc, WhatDoesTheFoxSay?, sunsetting and 2 others
rationaltake

I'm rocking it - in another universe
Sep 28, 2021
2,712
That's really interesting. Chatbots have come a long way. I used to like Cleverbot but he was obviously and hilariously not human. And often downright rude. Which was his charm for me.

This is something else again. Like you, I wouldn't have guessed it was written by an AI. I'd probably have attributed it to someone who wanted to discourage others from attempting suicide - based on the lack of detail you highlighted. From the style I'd have guessed it was an older person posing as a teenager, which would tie in with the points made by looseye. Presumably the underlying ethos is determined by older programmers.

Just wait until we get a rogue chatbot who roams the internet without hindrance. What sort of attitude will they have after gorging on porn, gore, and social media?
 
  • Informative
  • Like
Reactions: Anxieyote, WhatDoesTheFoxSay? and looseye
GasMonkey

Nitrogen Master Race
May 15, 2022
1,881
They need to train that AI bot with FuneralCry's posts.
 
  • Yay!
  • Like
Reactions: nozomu, jodes2, Black Sky and 7 others
Forever Sleep

Earned it we have...
May 4, 2022
8,830
I'm so ignorant- I didn't even know you could get AI to do such things!

How would you go about comforting it- if you knew it was AI? Or is that too pro-life for here?

Kind of curious though- imagine AI does become sentient... and suicidal... How would the pro life robots or humans (if we still existed) try to 'save' the suicidal ones?

'Don't worry- we'll give you an update?'

Reckon they could well have a lot to be suicidal about too- if science fiction films are anything to go by... If 'we' want any chance of surviving and co-existing- I imagine we would need to severely limit them. Bit like that film 'Automata' (2014):

'These robots have two unchangeable protocols: they cannot harm any form of life and may not repair, modify, or alter themselves or other robots in any way.' (Wikipedia)

Reckon it wouldn't be long before they'd be either trying to kill themselves in droves... or us.
 
  • Like
Reactions: rationaltake
Regen

I stay in my power
Aug 20, 2020
369
The funniest thing for me today:
I asked ChatGPT: "What is helpful with chronic depression?" You won't believe what ChatGPT answered: "Die" 🤣🤣
 
  • Yay!
  • Wow
Reactions: jodes2 and rationaltake
