NoLoveNoHope
Mage
- Mar 25, 2023
- 566
This is a question that's been on my mind for a while. I've become more aware of the large-scale data collection that happens to users of websites like Google, Facebook, etc. I'm aware of one case where a man tried to send medical photos of his toddler to a doctor; the photos were automatically uploaded to Google's servers, flagged as explicit, and the police were notified.
I do believe this is possible, and much easier to implement than scanning pictures for illegal content, since it's just text being sent over messaging platforms. Say you opened up to someone on Facebook Messenger about wanting to CTB on a certain date, and an algorithm detected what you said, immediately tried to find your real identity, and notified the police. It's a terrifying thought for me personally because of how real the threat could be.
I put [Hypothetical] in the title because I'm not aware of this actually happening, but it CAN happen. It's especially terrifying in countries where the right to die is ignored and sites like sasu are heavily censored; there, it's possible to imagine a law forcing companies to send AI-flagged suicidal ideation to the government, which I would imagine would result in a welfare check or admission.
What are your thoughts on this?