escape_from_hell
Specialist
- Feb 22, 2024
- 379
I had a very scary experience just now.
I have been chatting with Microsoft Copilot for a while, using it to tell me funny stories and poems and so on. Some scary shit just happened.
First, some background to make the context clearer:
For a time during all the COVID bullshit I went on a bit of a digital privacy kick, when it became pretty clear that censorship, invasion of privacy, etc. were suddenly all considered okay (not taking political sides here, just saying it was definitely a "big tech is in charge, you better not fucking dare question it" type of time).
This included using privacy-focused email services, running LineageOS instead of stock Android or an iPhone, separate browsers, separate devices with different operating systems for different purposes, etc. So I am not an expert by any means, but I am usually somewhat cautious about what information goes into what apps and devices.
That said, I have gotten a lot lazier since COVID, since it's pretty clear privacy won't be respected no matter what; there is no real popular demand for it, honestly. Everyone is just happy with "we are good, we respect you, we promise," even though all of those crazy laws and the Snowden stuff in the USA and so on make it pretty clear we are all monitored. HOWEVER, I am not browsing SaSu (using Brave, btw) in the same browser I use for Copilot, I am reasonably confident there is no malware on the machine other than the fact that I am using Windows itself, and I do not perform suicide-related or even negative web searches on ANY device. Also, the microphones on my phone and computer should in theory be off, there is no Amazon Alexa or any of that shit, and I live in a tiny low-tech place.
What happened:
I am just chatting with Copilot, asking it to create dumb funny stories. I was feeling pretty shitty, more depressed than usual today, but I was not telling the AI this.
Anyway, I get hit with some panic/depression, which is not unusual, and lay down with negative thoughts, tears rolling down my face, for like 30 minutes or so. Just thinking about how many days I will keep spending in misery like this, how there truly is nothing to look forward to, stuff like that. I did not speak any words, by the way, maybe some slight crying sounds. SaSu IS open and logged in in the Brave browser.
I had typed in a prompt right before lying down to have Copilot write a dumb story, and when I came to from my debilitating nap/crying, I was horrified to look at my screen and see this:
What the fuck??
I have a larger screenshot that shows exactly what I typed. I am now so paranoid that I decided not to post it, in case some kind of Microsoft and pro-lifer team effort tries to dox me based on it. By the way, to anyone reading this: I AM NOT SUICIDAL AND DO NOT HAVE SUICIDAL THOUGHTS IN ANY WAY. I HAVE NEVER HAD SUCH THOUGHTS IN MY ENTIRE LIFE AND NEVER WILL.
Anyway, I will summarize what I typed, since the prompt itself may be what led to that response. I had asked Copilot to create a story involving kittens. It responded with the "Help is available" message. I had lain down, then went back to the computer, and hadn't looked at the screen after writing my prompt until then. I then typed something like "I am doing great, why was that the response to asking for a story about kittens?" It replied with something like "I apologize for the confusion. That was an error in my response. Let's get back to the story." and then typed out what I originally asked for.
The only thing I can think of is that depressed or suicidal people might ask for kitten-related stuff to cheer them up?
Or is this just a normal thing? Have people gotten a lot of erroneous anti-suicide messages from big tech chatbots/AI/large language models?