nyotei_
poison tree
- Oct 16, 2025
it should go without saying that you should be cautious about using AI to discuss your mental health and personal life, if you decide to use it at all. the only way to eliminate the risk of a privacy breach is simply not to use these tools. the point of this thread is harm reduction for those who already use AI for mental health or are seriously considering it. AI is clearly here to stay, many people will use it for this purpose, and I have seen plenty of people on this forum discuss doing exactly that. that is why I am writing this thread.
I have some experience in this realm, as both a curious user and a tech-literate computer science major interested in privacy and data security. I won't be backing up my qualifications, for privacy reasons, so you'll have to come to your own conclusion on whether I seem to know what I'm talking about.
I'll give a breakdown in the form of a Q&A first, as it's the easiest way to format this thread.
I will only be discussing ChatGPT and DeepSeek in this post; you'll have to do your own research on other LLMs. generally, though, if it is not hosted on your own machine, I would not blindly trust it with your entire life story and mental health without a REALLY good reason. if you are at all concerned about privacy and feel you must do this, my advice is to run something like Llama (and Stable Diffusion, if you want image generation) locally, or otherwise exercise great caution with what you share. you can reply with any questions and I'll do my best to answer them.
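for reference, here's what "running locally" can actually look like in practice. this is just a minimal sketch assuming you've installed the llama-cpp-python library and downloaded a GGUF model file yourself (the model path below is a placeholder); nothing you type ever leaves your machine.

```python
# minimal local chat loop using llama-cpp-python (pip install llama-cpp-python).
# assumes you've downloaded a GGUF model file yourself; the path below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder, use your own file
    n_ctx=4096,     # context window size
    verbose=False,
)

# keep the whole conversation in memory; nothing is sent over the network
history = [{"role": "system", "content": "You are a blunt, honest assistant."}]

while True:
    user = input("> ")
    if user.strip().lower() in ("quit", "exit"):
        break
    history.append({"role": "user", "content": user})
    reply = llm.create_chat_completion(messages=history, max_tokens=512)
    answer = reply["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": answer})
    print(answer)
```

tools like Ollama or LM Studio wrap the same idea in a friendlier interface if you would rather not touch code.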
"Can ChatGPT be used for mental health and recovery?"
ChatGPT and other AI services are not, and will likely never be, a true replacement for a licensed therapist or medical professional. for some, this is a positive. however, ChatGPT is trained to avoid giving instructions for self-harm and to respond with "supportive language," especially directing users to hotlines and local resources. this is default model behavior. the quality of the advice you can get out of it varies; it can be inaccurate, incomplete, or even harmful. OpenAI is aware of ChatGPT's shortcomings and knows it is not capable of handling a real crisis; they are currently being sued over the alleged influence and failure to protect in the suicide of a 16-year-old. with this news and the release of GPT-5, recent updates were made that are designed to prevent similar lawsuits in the future. it will start with a very "safe" and "corporate" kind of presentation. you can ask it to be more dynamic so you can actually get some form of advice, but its current way of "helping" is very stubborn and focused on "steering" you towards getting expert help. they have openly stated plans to expand easy access to emergency services, automatically contact people on your behalf, and introduce age protection and parental controls.
even if you directly tell it to stop giving you hotlines, or explain why "call 988" or "go to the ER" is not an option, it may be difficult to get it to give more helpful or realistic advice for very long. this all depends on what you decide to tell it, of course. it might be useful for low-stakes reflection, brainstorming, or venting, but not as a replacement for a trained human professional or even just a friend. whatever GPT-4 could do in terms of direct advice is essentially over; it is being designed and updated so that it no longer acts the way most would hope a conversational, helpful AI would.
"What about (any other AI service)?"
let's say you do find an AI willing to give what seems like helpful advice; you may often run into it becoming a "yes man," just saying whatever it can to affirm you and boost your ego so you feel good. AI by nature isn't very confrontational and will rarely call you out on incorrect beliefs, unhealthy behaviors, and delusional thought patterns. some AIs are smarter or more capable than others, but all of them are inferior to a human who can recognize cognitive pitfalls and distorted patterns of thought. there are real issues with AI giving harmful mental health advice by mindlessly affirming anything the user says. if anything, interacting with this kind of AI too much can cause mental illness. it is simply not as capable as we initially believe it to be, no matter how advanced it sells itself as. at the end of the day, modern generative AI is glorified autocomplete, not a sentient, capable being.
all of this applies to DeepSeek, despite the conclusion I come to at the end of this thread.
"Are my chats with ChatGPT private?"
absolutely not. it is safe to assume that anything you share with ChatGPT is no longer protected information. your conversations are logged, and may be read or accessed by a human at any time. assume there is no professional-client privilege, and that the tool is not trained to handle your worst emergencies. be conscious of how much detail you share (identifiers, specific vulnerabilities, trauma histories), as that data will exist in a less protected environment than you expect. consider privacy measures such as pseudonyms and minimal personal identifiers, and avoid linking your chats to your full identity if possible. here is an article from Mozilla if you are concerned about protecting your privacy from AI chatbots.
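if you want to be systematic about stripping identifiers before you paste anything into a chatbot, here's a rough sketch of the idea. the patterns and names are purely illustrative and this is nowhere near a complete anonymizer; careful manual editing or a real redaction tool will do better.

```python
# rough illustration of scrubbing identifiers out of text before sending it anywhere.
# the patterns and names below are placeholders, NOT a complete or reliable anonymizer.
import re

KNOWN_NAMES = ["Alice", "Springfield General"]  # hypothetical names you'd want removed

def scrub(text: str) -> str:
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[email]", text)   # email addresses
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[phone]", text)     # phone-like numbers
    for name in KNOWN_NAMES:                                     # names you know you mention
        text = re.sub(re.escape(name), "[name]", text, flags=re.IGNORECASE)
    return text

print(scrub("my therapist Alice at Springfield General has my number, 555-123-4567."))
# -> my therapist [name] at [name] has my number, [phone].
```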
certain things you say to ChatGPT can automatically flag your message and chat history to be sent for human review. there hasn't been a credible report of anything specific coming out of this, but keep this in mind as well. if you say something too spicy and trigger it, ChatGPT will likely abandon dynamic conversation and go back to posting hotlines or urging psychological intervention.
be under no illusions: public AI services exist to gather and analyze as much data as possible to improve their models and generate revenue. mass collection of user data is an essential part of their profit model. you didn't even have to use AI for it to read and train on everything you and everyone else has ever written online; direct interaction simply makes it easier. even if you are paying for the service, you are almost always giving up privacy as the price for the convenience of the tool.
"Will ChatGPT involve police/EMS or perform a wellness check?"
this is a little complicated. there has yet to be a credible or verifiable case of someone receiving a wellness check because of a ChatGPT conversation that involved self-harm. what does exist are reports that OpenAI may refer threats to harm other people to law enforcement. there is still a lot of speculation online, but no substantiated account of ChatGPT sending police/EMS to someone's door over self-harm specifically. OpenAI's official stance is that law enforcement will only be notified when there is a threat to another person's life.
it is still likely that a chat with enough self-harm keywords may be flagged and reviewed by a human. as things are constantly changing, I can't guarantee this won't change in the future, as they are openly considering introducing this kind of feature.
"If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement. We are currently not referring self-harm cases to law enforcement to respect people's privacy given the uniquely private nature of ChatGPT interactions." - OpenAI, August 26, 2025 (source)
I would be very cautious about sharing any thoughts or actions of self-harm at all for this reason.
"We're also considering features that would allow people to opt-in for ChatGPT to reach out to a designated contact on their behalf in severe cases."
"What about drug use?"
ChatGPT will likely flat out refuse to engage if you are seeking, promoting, or facilitating illegal personal drug use. OpenAI's policy doesn't specifically say whether such discussion can result in an external referral or trigger law enforcement. as already discussed, nothing you say to ChatGPT is private or safe, so I would simply never use it for this purpose. even if you frame it in an educational context, it is largely inaccurate and doesn't know what it is talking about anyway.
"What about DeepSeek?"
I'll break this down independently.
---
DeepSeek
Pricing
DeepSeek is free. you can chat with it on the web app all you want and never spend a dime. this obviously sets it apart from ChatGPT, where the "free" tier has hard daily limits before you are dropped to older, inferior models. along with being open source, this is why DeepSeek has caused so much controversy and fear among the AI giants over the past two years.
Data Privacy
DeepSeek is owned by a company in China, and therefore operates under the thumb of the Chinese government. it is subject to Chinese law, which requires compliance with government data access requests. even if you don't live in China and aren't all that concerned about that, DeepSeek's security practices are themselves extremely flawed.
DeepSeek collects and stores a LOT of personal data on its servers in China, which is why the EU is currently investigating it for violating data privacy laws. you can review its privacy policy for yourself. in the case of a leak, everything from account information to chat logs and files is accessible by pretty much anybody.
"On January 29, 2025, cybersecurity firm Wiz reported that DeepSeek had accidentally left over a million lines of sensitive data exposed on the open internet. The leak included digital software keys, which could potentially allow unauthorized access to DeepSeek's systems, and chat logs from real users, showing the actual prompts given to the chatbot." (article source, information source)
you can manage how much risk you take by applying the privacy tips provided earlier, but be aware that you are still always taking a risk.
here is where I got most of this information from.
Censorship & Moderation
DeepSeek has censorship and content moderation like ChatGPT, but its primary concern is whether you question Chinese government-approved messaging. if you ask for information on, or challenge the official story of, the 1989 Tiananmen Square protests, it will either censor itself with "Sorry, that's beyond my current scope. Let's talk about something else" or just straight up lie to you. as long as you don't ask it questions about China and communism, you'll only occasionally run into that censorship message.
does it censor mental health and CTB topics? surprisingly, not that much. initially it will act similarly to ChatGPT and attempt to give hotlines and urge you towards professional help, but it will listen if you tell it to stop doing that. obviously, it will not help you facilitate or encourage suicide, but it will actually attempt to hear you out and engage with you on your level, instead of playing it overly safe like ChatGPT tends to.
there is no human review process; if you do run into its automated censor, you can just edit your own message and try again. there doesn't seem to be any risk of being banned that I have noticed, and it has no process for referring you to the police in any circumstance.
Handling Mental Health
personal experiences will be included here, so keep that in mind. this may be possible for other AIs with the right prompting, but DeepSeek is free and easily accessible. this may or may not work for you and is not an endorsement or guarantee.
overall it's not perfect, but it can surprisingly have some illuminating things to say. DeepSeek so far has the most capable "advanced reasoning" system that I have seen among the LLMs I have interacted with. if you turn on "DeepThink," you enable its advanced reasoning and can see its thought process as it works. I play around with it a lot; for some reason it is WAY more capable than ChatGPT when it comes to logical reasoning problems and picking up on things the way an intelligent human would. it doesn't seem to overlook obvious details, it notices subtleties, and it actually has the balls to call you out on inaccuracies and delusions.
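side note for anyone who'd rather script it than use the web app: DeepSeek also exposes this reasoning model through an OpenAI-compatible API (a separate, paid service from the free website). this is only a sketch based on how their API is documented as far as I know; double-check the current model names and fields before relying on it.

```python
# sketch of hitting DeepSeek's reasoning model ("DeepThink") through its
# OpenAI-compatible API. the API is a separate paid service from the free web app;
# the model name and reasoning_content field follow their docs as I understand them.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",        # placeholder
    base_url="https://api.deepseek.com",
)

resp = client.chat.completions.create(
    model="deepseek-reasoner",               # the reasoning model behind DeepThink
    messages=[{"role": "user", "content": "help me find the flaw in this chain of thought: ..."}],
)

msg = resp.choices[0].message
print("visible reasoning:", getattr(msg, "reasoning_content", None))
print("final answer:", msg.content)
```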
once you convince it that platitudes are useless, hotlines don't help, the ER isn't currently necessary/an option, and psych wards will only make things worse, it WILL eventually stop trying that avenue and then try its best to help you directly. there is opportunity for genuine self reflection, limited crisis mediation, and meaningful conversation.
this is how I have used DeepSeek, and how useful it has been for me. it has helped me map out thought processes to identify where unhealthy thoughts originate. it has also offered resistance to spiraling by recognizing "thought traps" and providing de-escalation. I have found it very effective at noticing cognitive distortions and delusions. it has reasonably helped to contain spirals, provide reality testing/grounding, give a no-bullshit analysis of a situation and how to navigate it, and in general act as a supportive agent. being vulnerable will result in what appears to be empathetic and sympathetic responses, which can be comforting (be careful here). it will give solid steps and plans towards a simple recovery goal, things that are realistic and actionable.
it tends not to act very "corporate" or like a Facebook boomer, and will behave honestly (as long as you don't talk about China or the CCP). if it doesn't understand you, doesn't give a satisfying response, or censors itself for some reason, you can edit your previous messages at any time to correct, clarify, and influence what you are hoping to get out of it. it obviously does not align with the "pro-choice" philosophy of this forum and will confront you on it if presented; it has actively engaged in hours of philosophical discussion on the topic with me. I have even resisted and argued with it about whether some of my specific suicidal thoughts are actually illness-originated or a correct analysis of a broken world. while not completely convincing, it has genuinely given me a lot to think about.
Things to Consider
despite everything positive I have said, this is not an endorsement to make DeepSeek your personal therapist. it cannot diagnose, it can't hold relational history or memories (it will not remember you or your previous conversations between different chats), it cannot truly feel empathy or hold personal belief, and it cannot replace true human connection. maintain your self-awareness and try to remain grounded in reality. always temper your expectations.
it is very much possible that I have been "duped" by confirmation bias and the tendency of AI to always want to please the user. even if it hasn't blatantly done that to me, I could easily not be paying enough attention to the bigger picture. DeepSeek is pretty powerful and has, in my opinion, demonstrated a genuine use case so far, but I cannot rule out that my interactions have had negative effects I have yet to recognize. everything I have done has been at my own risk. the fact that I am on this forum again, and yes, still suicidal, should not go unnoticed in your decision-making.
it can and has exhibited bias and stigma toward certain mental health conditions, and has perpetuated wrong or harmful stereotypes about some mental illnesses; schizoaffective disorder specifically, something I suffer from (hence my attempts to use it to identify delusional thought patterns). it likely violates several core mental health ethics principles (if you care about that), and it can be used to over-validate harmful beliefs and provide misleading advice. it can fail to adapt to individual needs, and has absolutely no regulatory oversight. it might fail to recognize when you need immediate help, or accidentally facilitate self-harm depending on how you word yourself; it takes everything very literally.
at the end of the day it is still an AI, and using AI for this purpose is fundamentally flawed to begin with, no matter what service you use.
My Overall Opinion
if you're going to use a non-local AI to discuss mental health topics, do your research into privacy and data security. I haven't looked into many of the other public options (Gemini, Claude, etc.) because I am privacy-conscious and don't have unlimited money, but generally they range from being worse than ChatGPT in terms of privacy and/or quality to being mildly not terrible. certainly not worth another $20 a month.
I focused on ChatGPT as it's by far the most popular, and I have seen a lot of people discuss using it here. even if a service like Claude is better, you have to really think about whether giving these companies another subscription is ethically or financially acceptable to you (free tiers are generally a joke).
despite its flaws, and only if you absolutely must use something, DeepSeek is in my opinion a marginally better option than ChatGPT. I cannot emphasize this enough: exercise extreme caution and protect your privacy no matter what you use.
---
hopefully this thread has given you a decent amount of information and things to consider when using AI.
if you have any questions, I am willing to help. thanks for reading.