NoLoveNoHope

Mage
Mar 25, 2023
566
This is a question that's been on my mind for a while. I've become more aware of the large-scale data collection that happens to users of websites like Google, Facebook, etc. I know of one case where a man tried to send medical photos of his toddler, and they were automatically uploaded to Google's servers, flagged as explicit, and reported to the police.

I do believe this is possible, and much easier to implement than scanning pictures for illegal content, since it's just text being sent over messaging platforms. Say you opened up to someone on Facebook Messenger about wanting to CTB on a certain date, and an algorithm detected what you said, immediately tried to find your real identity, and notified the police. It's quite a terrifying thought for me personally because of how real the threat can actually be.

I put [Hypothetical] in the title because I'm not aware of this happening, but it CAN happen. It is especially terrifying in countries where the right to die is ignored and sites like sasu are heavily censored. There, it's possible to imagine a law forcing companies to send AI-flagged suicidal ideation data to the government, which I imagine would result in a welfare check or admission.

What are your thoughts on this?
 
Reactions: LoiteringClouds, Homo erectus, Praestat_Mori and 6 others
eashanm

God
Feb 22, 2023
512
I don't even understand how police can interfere in someone's life decision. It is a personal decision. This is wrong on so many levels.
 
Reactions: depresso.espresso, myusername890, Homo erectus and 2 others
Holu

Hypomania go brrr
Apr 5, 2023
673
eashanm said:
I don't even understand how police can interfere in someone's life decision. It is a personal decision. This is wrong on so many levels.
To be fair, they can't do that much. Assuming you're in America, the worst they can realistically hit you with is 72 hours (technically 14 days, but you just gotta be smart and act like you're getting better during your 72 hours).

That's not to say it isn't a bitch and a half to sit 72 hours with what feels like constant harassment by hospital employees, no phone, and no shoelaces. Still though, it's not like they're achieving much lmao.
 
Reactions: Homo erectus, Toward Zero and Praestat_Mori
Forever Sleep

Earned it we have...
May 4, 2022
9,829
I can absolutely see this happening. I'm sure the technology is there. I've heard that emails are scanned for words like 'suicide'. I don't know if it's true.

I suppose it will depend on attitudes towards suicide moving forward. Plus suicide rates. If countries start to see a peak in suicides, you can bet it will make people panic, especially parents. I'm sure people will accept gross infringements of privacy if they are led to believe their children are under threat. It does tend to be young suicides that people care the most about.
 
Reactions: LoiteringClouds, Homo erectus and Praestat_Mori
LoiteringClouds

Tempus fugit
Feb 7, 2023
3,786
NoLoveNoHope said:
I put [Hypothetical] in the title because I'm not aware of this happening but it CAN happen. …
I'm afraid of the dystopian scenario described below:

Parliament passes a bill which enables the following scheme:

Once the AI identifies me as suicidal, the police come and detain me, throw me into a psych ward involuntarily, and, last but not least, the hospital slaps a huge bill in my hand.

The most important point is that this is a business scheme: sending people to the psych ward involuntarily and charging them huge fees. Hospitals make money, and the police and the AI company take a cut. All of this is conducted in the name of "saving lives," and they'll justify involuntary commitment with data showing that the AI can detect people at risk.
They would say the efficacy of this AI "first responder" is scientifically proven. They might think, "Finding high-risk individuals from their social media posts is easier than predicting stock prices, so let's make money with this scheme."

The only losers are the suicidal people, who pay the hefty bills. I think they could target only those among the AI-flagged who have money.
Those people lose their savings, while the people behind the scheme believe they are saving the suicidal. And when someone actually CTBs despite their "effort," they'll conveniently blame the patient.

I would call it the "grand theft autonomy" scheme.

(Of course, this is just a hypothetical scenario, and it's not easy to know whether a person is genuinely suicidal based solely on what they post on the internet. I also don't know how lucrative it would be if implemented, but I hope it won't happen.)
 
Reactions: sleepyhollow, 0000000000000, Homo erectus and 1 other person
𖣂𖣂𖣂.

𖣂
May 26, 2023
165
NoLoveNoHope said:
This is a question that's been on my mind for a while. …
Kinda similar to the surveillance after 9/11, which was an invasion of privacy. That can make things worse: if topics like this are censored, no one can relate to anyone, and people will always find another way, even if it comes to breaking the law.
 
Reactions: Homo erectus
captive

Member
May 31, 2023
52
i would stay away from services like google and facebook as far as possible if so. you should never use them anyways if privacy is a concern. especially if spying levels were that extremely high like you described. theoretically speaking it shouldn't be that hard to disappear from their radars, there is A LOT of privacy oriented stuff out there like telegram, signal, tor browser, some paid email and etc. i also highly recommend using duckduckgo as your primary search engine because it doesn't care about how suicidal you are unlike google. for example, if you had to search something like "nembutal lethal dose" duckduckgo will just find it without that suicide hotline bullshit.
 
Reactions: Homo erectus, NoLoveNoHope and pthnrdnojvsc
pthnrdnojvsc

Extreme Pain is much worse than people know
Aug 12, 2019
2,737
NoLoveNoHope said:
This is a question that's been on my mind for a while. …
Imo that will be one of the main functions of Google AI and Meta AI: stopping suicide.

Google is evil. Inmendham agrees.
 
Reactions: Homo erectus
lachrymost

finger on the eject button
Oct 4, 2022
344
existentialgoof talks about this kind of thing. It's long overdue for us to stand up for our right to die in this brave new world.
 
Reactions: Homo erectus
