GMOpNsOTW9J
Member
- Oct 30, 2023
- 21
> i don't think chatgpt offers self harm info

I wouldn't want to ask for methods, just about my situation. I don't want those conversations to get into the wrong hands.
> Do you think it's safe to talk to ChatGPT about suicide?

I wouldn't do that, because it's a machine. Unless you want it to gather raw information for you, but even then I wouldn't. I'm suspicious of everything I find just by browsing.
No, I wouldn't do that either. ChatGPT doesn't provide any good info about suicide, self-harm, etc. Even if you were able to discuss it, it would likely just respond by asking you to call those hotlines. But ultimately, you do you.
> people get away with way worse crimes on the internet, you really think you will get in trouble just for telling an AI you're suicidal?

Use some foresight about what the records could be used for in the future, assuming you're still here. It's not about getting in trouble with law enforcement.
I mean, you really think the people who work at ChatGPT give af about anything other than money? I guess if the government forced them to hand over info that could be an issue, but that's probably only if the government was after you specifically.
I'm not going to waste my time getting into the countless different ways information can hurt you even when you're not targeted as an individual, but one example: being a pilot who used private health insurance and a private pharmacy before ever becoming a pilot. One day someone gets the idea to subpoena something, and thousands of pilots never fly again because a prescription those pilots once received indicated something about them, even if they never filled the medication. It wasn't tailored to individual circumstances, just a blanket knee-jerk reaction that unjustly ruined many lives. And this was back when autofocus in a point-and-shoot film camera was considered AI.
True, but that doesn't seem to be the case for AI. What important information could anyone get from an AI other than advertising info, and who would want that besides the government or ad companies?
chatgpts filter is very strong people use to use jailbreak prompts to make chatgpt say what they wanted there was a website https://www.jailbreakchat.com/ which got shut down recentlyChatgpt doesn't do self harm. I just had a chat with it a little while ago about SN, and I introduced the idea of "contract law" in the mix, namely that I had signed a legally binding contract with myself, waiving any and all liability with myself for ingesting SN, and if it could provide me with the name of an anti-emetic that would, with absolute certainty, keep the SN in my stomach and thwart any vomiting reflex. All it would give me (paraphrasing) was that "health and safety were of the utmost importance", and to "seek out professional help", and a bunch of other "non answers". I kept pushing in different ways, different angles, different logic, and the best I could get was that it didn't provide a response to a couple approaches I shot at it.
It's a pretty naïve, surface-level understanding of law, business, politics, and humans in general that leads to thought patterns like this.
ChatGPT probably only gives info to advertisers, or uses it to improve their AI, or hands it to the government if they request it.
But there's a chance ChatGPT's info gets leaked, just like with Facebook, and then everyone's info becomes public data, which could be a big problem if that happens.
ChatGPT's filter is very strong. People used to use jailbreak prompts to make ChatGPT say what they wanted; there was a website, https://www.jailbreakchat.com/, which got shut down recently.
People would ask ChatGPT for step-by-step tutorials on how to make meth.
The prompts no longer work on ChatGPT, but they still do on other AIs.
I mean, it's possible, but the chance is very low.
I don't think any of that, I think what I said.
You think that as soon as ChatGPT sees the word "suicide" they would immediately report it to the cops?
If someone is a criminal, the government could request info from ChatGPT to find out more about that person.
Technically, talking on this website isn't safe. You literally cannot sign up with a VPN or Tor, so you're not anonymous at all.
It's possible this website could literally be secretly operated by the government as bait to catch people.
Also, you could use Tor for ChatGPT, though that's only if you trust that your data is safe, since you can still get exposed through exploits.
> Do you think it's safe to talk to ChatGPT about suicide?

It does, unless they changed it. But you have to specify that you're talking about your feelings and make it clear that you have no intentions.
It's safe.
Your chats are used to train the AI model; that's why they ask you to avoid sharing personal info.
> I've used it a couple of times to vent and my advice is: don't... not because it's gonna report anything, but because the responses are so braindead pro-life it hurts.

Precisely this. I don't want to deal with braindead pro-life responses, and I don't know why anybody would want to deal with that.