bosschop

New Member
Mar 11, 2023
2

It's more focused on the technical aspects of the content itself, and considering it's an academic paper it's wordy as fuck, but the importance lies in the data collection and pre-processing.

"Using network data generated from over 3.2 million unique interactions of N = 192 individuals, n = 48 of which were determined to be highest risk users (HRUs), a machine learning classification model was trained, validated, and tested to predict HRU status."
"A complete record of posting activity within the "Suicide Discussion" subforum of Sanctioned Suicide, from inception on March 17, 2018 to February 5, 2021, was programmatically collected and organized as tabular data using a custom Python (v3.8) script that primarily leveraged the BeautifulSoup package to parse the site's HTML and XML information.34 This effort resulted in a dataset containing more than 600,000 time-stamped posts across nearly 40,000 threads and over 11,000 users. This posting activity information consisted of (i) thread title, (ii) thread author, (iii) post author, (iv) post date, (v) post text content, and (vi) direct mentions and references to other user comments within the post text. All information, except for post text, was used in this study. To impose an added layer of user anonymity, each username was automatically assigned a randomly generated, 32-character hashed ID. These de-identifying IDs were automatically replaced with all instances of users' online handles within the data prior to subsequent preprocessing and analysis."
"A structured approach to select a subset of appropriate users and identify HRUs was devised based on the findings discussed through the New York Times investigation into Sanctioned Suicide35 as well as the authors' thorough review of the forum content. Moreover, this strategy was described and utilized in a previously published analysis of users on the Sanctioned Suicide forum.33 To reiterate herein, data was first filtered by searching for thread titles with the following keywords/phrases: "bus is here," "catch the bus," "fare well," "farewell," "final day," "good bye," "goodbye," "leaving," "my time," "my turn," "so long," and "took SN." Of note, "catch the bus" is a euphemism adopted by the community to symbolize suicide,33 while "SN" is short for sodium nitrate, an increasingly popular chemical used in suicide-related methodology. These terms were used to identify "goodbye threads" on Sanctioned Suicide, and thus have the highest probability of signaling for an impending attempt"
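In case anyone wants to see what that pipeline amounts to, here's a minimal sketch of the title-filtering and de-identification steps the paper describes. The keyword list is quoted from the paper; the choice of MD5 and the salt are my assumptions, since the paper only says "32-character hashed ID" without naming a hash:

```python
import hashlib

# Thread-title keywords quoted from the paper's filtering step
# (lowercased, since titles are lowercased before matching).
GOODBYE_KEYWORDS = [
    "bus is here", "catch the bus", "fare well", "farewell",
    "final day", "good bye", "goodbye", "leaving",
    "my time", "my turn", "so long", "took sn",
]

def is_goodbye_thread(title: str) -> bool:
    """Flag a thread title containing any of the goodbye keywords."""
    lowered = title.lower()
    return any(keyword in lowered for keyword in GOODBYE_KEYWORDS)

def pseudonymize(username: str, salt: str = "per-study-secret") -> str:
    """Map a handle to a 32-character hashed ID. An MD5 hex digest is
    32 characters; the paper doesn't say which hash it actually used."""
    return hashlib.md5((salt + username).encode("utf-8")).hexdigest()

assert is_goodbye_thread("My time has come")
assert not is_goodbye_thread("Question about SN purity")
```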
In simpler terms: a large amount of data was collected by a university research group, and these posts were then analysed for specific words. Not that this is of any interest to me personally, but @RainAndSadness, are there counter-measures against this kind of thing, or did you have knowledge of it?

They even said: "Accordingly, written informed consent was waived and this study "exempt" from further review."

 
  • Informative
  • Aww..
  • Wow
Reactions: mlha, notadaisy, vampire2002 and 16 others
dumbnhappy

just say it ditto
May 22, 2024
35
1. Using Noindex Meta Tags
You can protect your website from AI-powered content-scraping tools by using noindex meta tags. These tags are added to the HTML code of your web pages and tell the crawlers that collect training data not to index those specific pages of your website.

Indexing is the process that makes your pages accessible to web crawlers used by search engines to help them determine the relevance of your content and assign rankings accordingly.

Example of using a noindex tag in HTML code.
By using noindex tags, you are preventing web crawlers from adding the information on those pages to their databases. This protects your website content and prevents its use for training large language models.

However, you have to be extra careful if you choose to go with this method. You would not want to make your website inaccessible to search engine crawlers, as it may have a tremendous impact on your rankings.

Plus, if you're already blocking certain web crawlers from accessing your pages through the robots.txt method, there's no need to add noindex meta tags for them: a bot that never fetches the page can't read the tag anyway.

So go with either the first method listed in this article or this one. It's solely a matter of preference and which you perceive as more effective.
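For reference, the tag itself is a one-liner; it goes inside each page's <head> and asks compliant crawlers not to index that page:

```html
<!-- Ask compliant crawlers not to index this page -->
<meta name="robots" content="noindex">
```

The robots.txt alternative would instead list the crawler's user-agent at the site root, e.g. `User-agent: GPTBot` followed by `Disallow: /` (GPTBot is OpenAI's crawler; other vendors use other user-agent names).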

2. Requiring Authorization for Content Access
If you are worried about your website content being accessed and used by AI-powered tools, you can consider restricting public access.

This means allowing only certain users with the provided credentials to access the content published on your website.

By doing this, you would effectively block web crawlers from accessing your content and ensure the uniqueness of the information you publish.

Browser authentication login form.
However, there's a downside to implementing this method, as it can severely hamper your website's growth.

By allowing only authorized visitors to access your content, you kind of give up on the opportunity to cast a wider net and attract a relevant audience. Consider it a price to pay for keeping your data confidential.

So, this may not be a viable approach to consider for websites that have just gotten started or require their content to be accessed by the general public to keep the needle moving.
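As a concrete sketch, the simplest form of this is HTTP Basic Auth at the web-server level. This nginx fragment is illustrative only; the password-file path is an assumption, and the file would be created with a tool like htpasswd:

```nginx
# Require a username/password for everything on the site.
location / {
    auth_basic           "Members only";        # realm shown in the login prompt
    auth_basic_user_file /etc/nginx/.htpasswd;  # path is illustrative
}
```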

3. Using Gated Content
This method is a watered-down version of the one mentioned above. Here, the content published on your website is still available for public access, but you need visitors to provide you with certain information to view it.

For example, you may ask your visitors to provide you with their names, email addresses, and contact information to view the content published on your website.

Get started form for gated content.
When you use a form to gate the content published on your website, it restricts the bots from crawling that particular post or page and minimizes the likelihood of your content being used by large language models or AI-powered tools.

The addition of an extra step to access the required information through your website may affect the user experience of your visitors. However, it's a small compromise compared to the benefits associated with this measure.
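A bare-bones version of such a gate might look like the form below. The field names and the /unlock-article endpoint are made up for illustration; the server would only return the content after a successful submission:

```html
<form action="/unlock-article" method="post">
  <label>Name <input type="text" name="name" required></label>
  <label>Email <input type="email" name="email" required></label>
  <button type="submit">Get access</button>
</form>
```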

4. Adding CAPTCHA
CAPTCHA stands for Completely Automated Public Turing Test to Tell Computers and Humans Apart. It's a computer program that protects websites from content-scraping bots and automated information extraction by distinguishing human input from that of a machine.

So, implementing it on your website qualifies as an effective method that protects your content from the rise of AI.

Just like the gated content approach, implementing CAPTCHA would add an additional step for your visitors to complete in order to access the required content.

Google reCaptcha checkbox.
The system may ask your visitors to complete an easy-to-solve puzzle or perform a certain action, like entering the text displayed in the form of an image.

If you think that the gated content method would not be best suited as your visitors may not be comfortable sharing their personal information, then adding a CAPTCHA to your site is the way to go.

It prevents AI-powered tools from accessing your content and helps you deal with content duplication issues.

However, just like other methods similar to this one, adding a CAPTCHA to your website may also affect the user experience. So, a little drop in returning visitors is to be expected.
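The checkbox pictured is Google's reCAPTCHA v2, and embedding it looks roughly like this. YOUR_SITE_KEY stands in for the key Google issues, and the server must still verify the submitted token against Google's siteverify endpoint:

```html
<!-- Load the reCAPTCHA v2 script and render the checkbox widget -->
<script src="https://www.google.com/recaptcha/api.js" async defer></script>
<form action="/view-content" method="post">
  <div class="g-recaptcha" data-sitekey="YOUR_SITE_KEY"></div>
  <button type="submit">View content</button>
</form>
```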

5. Leverage Copyright Laws
You can protect your website content from AI-powered tools through copyright protection. This method requires you to leverage copyright laws that act as a safeguard against duplicate or plagiarized content.

All it requires is for you to add a disclosure or copyright notice to each of your pages, clearly stating legal terms or policies. This acts as a warning to others who may intend to use your content without your permission.

Copyright word in dictionary.
Upon detecting an infringement, you can take advantage of the Digital Millennium Copyright Act (DMCA) and have the content that's been published without your consent taken down.

All it takes is sending a takedown notice to the publisher using your content, and they'll be legally bound to remove it. If they don't comply with your request for some reason, you have the right to take them to court, where the decision would likely go in your favor.

So, even if the information published on your website is used for training large language models, you can minimize the likelihood of content scraping or duplication by copyrighting your website.

from https://www.copyrighted.com/blog/protect-website-from-ai
 
  • Like
  • Informative
Reactions: LostLily, notadaisy, vampire2002 and 7 others
Dr Iron Arc

Into the Unknown
Feb 10, 2020
21,154
Don't worry guys, I got this. I know how to confound the AI.

Pee pee poo poo.
 
  • Yay!
Reactions: KuriGohan&Kamehameha, notadaisy, SVEN and 8 others
charaunderground

* Let justice be done.
Nov 29, 2024
72
> ""SN" is short for sodium nitrate"

Well, at least we know they're probably not gathering all too accurate data, I guess?
 
  • Yay!
  • Like
Reactions: compulsoryaliveness, milkcarton, LostLily and 22 others
LaVieEnRose

Angelic
Jul 23, 2022
4,243
Great. Can't even suffer in peace without being treated as some curiosity. Did they draw any other conclusions from reviewing untold pages of content besides "prevent, prevent, prevent!!!!!!"?

Maybe they should have included the spring of 2023 in their review, when welfare checks were conducted after IC's bust, and seen how that fostered the welfare of users here. Since that's the kind of measure whose promotion inevitably follows from this kind of study.
 
  • Like
  • Hugs
  • Informative
Reactions: GlassMoon, notadaisy, vampire2002 and 8 others
Douggy82

Member
Nov 4, 2024
38
I, for one, welcome our new insect overlords.
 
  • Yay!
  • Like
Reactions: notadaisy, Hollowman, NormallyNeurotic and 1 other person
L'absent

À ma manière 🪦
Aug 18, 2024
710
I'll kill myself eating all the boxes of chocolates in the supermarket.
 
  • Yay!
  • Like
Reactions: compulsoryaliveness, notadaisy, vampire2002 and 6 others
ms_beaverhousen

-terminally sad-
Mar 14, 2024
1,268
Did you happen to see this? It's similar. So much so I was having whiplash thinking this was a breakdown of the enclosed article. Dizzying.
 
  • Like
Reactions: Mateira and Dr Iron Arc
PhilosopherInAV0id

The Reaper of Self, Amid the Silence
Jan 28, 2024
32
Oh holy fudge no. I looked at this site as a safe haven to chat with others of a similar mind, not to be STARED AT ON DISPLAY LIKE A MONKEY IN A CAGE!!!! If they want to study us, then go ahead and talk to us, 1-on-1, and say it to our face, and if they can't handle that, then they shouldn't even bother violating the personal rights we came to this site to try to protect and preserve; this is where we went to get AWAY from this kind of thing.

Sorry for the rant. This just feels VERY stupid and frustrating in all kinds of ways to me.
 
  • Like
  • Hugs
Reactions: GlassMoon, LostLily, notadaisy and 3 others
opheliaoveragain

Eating Disordered Junkie
Jun 2, 2024
1,334
thanks I hate it.

(joke. thank you for the info OP🤍)
 
  • Yay!
Reactions: ForestGhost
ForestGhost

The ocean washed over your grave
Aug 25, 2024
114
Good lord not another one. These people don't give a shit about us, they're just milking this website to get a publication and feather in their cap for doing "socially conscious research". Complete charlatans.
 
  • Like
Reactions: Mateira, notadaisy, vampire2002 and 2 others
MentalFuneral

Member
Sep 11, 2024
56
AI can suck my asshole
 
  • Like
Reactions: notadaisy, Roadrunner, divinemistress36 and 1 other person
ScaredOfMachines

I am who I am
Nov 8, 2024
78

It's more focused on the technical aspects of the content itself, and considering it's an academic paper it's wordy as fuck, but the importance lies in the data collection and pre-processing.

"Using network data generated from over 3.2 million unique interactions of N = 192 individuals, n = 48 of which were determined to be highest risk users (HRUs), a machine learning classification model was trained, validated, and tested to predict HRU status."
"A complete record of posting activity within the "Suicide Discussion" subforum of Sanctioned Suicide, from inception on March 17, 2018 to February 5, 2021, was programmatically collected and organized as tabular data using a custom Python (v3.8) script that primarily leveraged the BeautifulSoup package to parse the site's HTML and XML information.34 This effort resulted in a dataset containing more than 600,000 time-stamped posts across nearly 40,000 threads and over 11,000 users. This posting activity information consisted of (i) thread title, (ii) thread author, (iii) post author, (iv) post date, (v) post text content, and (vi) direct mentions and references to other user comments within the post text. All information, except for post text, was used in this study. To impose an added layer of user anonymity, each username was automatically assigned a randomly generated, 32-character hashed ID. These de-identifying IDs were automatically replaced with all instances of users' online handles within the data prior to subsequent preprocessing and analysis."
"A structured approach to select a subset of appropriate users and identify HRUs was devised based on the findings discussed through the New York Times investigation into Sanctioned Suicide35 as well as the authors' thorough review of the forum content. Moreover, this strategy was described and utilized in a previously published analysis of users on the Sanctioned Suicide forum.33 To reiterate herein, data was first filtered by searching for thread titles with the following keywords/phrases: "bus is here," "catch the bus," "fare well," "farewell," "final day," "good bye," "goodbye," "leaving," "my time," "my turn," "so long," and "took SN." Of note, "catch the bus" is a euphemism adopted by the community to symbolize suicide,33 while "SN" is short for sodium nitrate, an increasingly popular chemical used in suicide-related methodology. These terms were used to identify "goodbye threads" on Sanctioned Suicide, and thus have the highest probability of signaling for an impending attempt"
In simpler terms: a large amount of data was collected by a university research group, and these posts were then analysed for specific words. Not that this is of any interest to me personally, but @RainAndSadness, are there counter-measures against this kind of thing, or did you have knowledge of it?

They even said: "Accordingly, written informed consent was waived and this study "exempt" from further review."

Christ, AI-users are so fucking hungry to infringe on people's rights that they can't even let people die in peace anymore. It makes me sick. @RainAndSadness Bosschop has a point, are there any protections or plans for protections against this from happening in the future?
 
Tommen Baratheon

1+1=3
Dec 26, 2023
338
4. Adding CAPTCHA
CAPTCHA stands for Completely Automated Public Turing Test to Tell Computers and Humans Apart. It's a computer program that protects websites from content-scraping bots and automated information extraction by distinguishing human input from that of a machine.

So, implementing it on your website qualifies as an effective method that protects your content from the rise of AI.

Just like the gated content approach, implementing CAPTCHA would add an additional step for your visitors to complete in order to access the required content.

Google reCaptcha checkbox.
The system may ask your visitors to complete an easy-to-solve puzzle or perform a certain action, like entering the text displayed in the form of an image.

If you think that the gated content method would not be best suited as your visitors may not be comfortable sharing their personal information, then adding a CAPTCHA to your site is the way to go.

It prevents AI-powered tools from accessing your content and helps you deal with content duplication issues.

However, just like other methods similar to this one, adding a CAPTCHA to your website may also affect the user experience. So, a little drop in returning visitors is to be expected.
Recently, I was on a forum that used reCAPTCHA: total PITA. And a Google reCAPTCHA checkbox? So they can collect data on us?
 
Forever Sleep

Earned it we have...
May 4, 2022
9,809
What's the end goal though? To identify people and stop them attempting? That's where I thought it was going initially. To use AI to identify members and send police around when they post a goodbye thread. That would be a massive invasion of privacy. I wonder if that's even legal. I expect it will become possible. Maybe it already is.

Doesn't surprise me at all that they would test out the technology (without permission) prior though.

I don't think I'd risk a 'live' goodbye thread regardless to be honest. Just to be on the safe side. Just to cut out any risk of a human or non human interfering.

It makes me wonder where it will go though in the future. Will that happen? Will people's state of mind be assessed by AI via their online presence and intervention be forced on them? Sanction Suicide via carrier pigeon or pen pals. What do we think guys?

Just makes me so glad I'll be dead before technology gets better and more devious and that I've brought no children here to try and navigate through it. I guess future generations are going to breed the very best in espionage. To be able to dodge all the ways we'll be monitored in future, they'll have to be super spies.
 
  • Like
  • Aww..
  • Hugs
Reactions: KuriGohan&Kamehameha, vampire2002, savory and 1 other person
Roadrunner

Student
Mar 18, 2024
180
Ever watch the movie Snowden? Big brother has been watching long before AI, imho
 
SilentSadness

The rain pours eternally.
Feb 28, 2023
1,123
I just think it's pointless to study a forum if they aren't going to read what members have to say. Considering they only designated "high risk users" as people with confirmed attempts, that just proves the only thing they care about is preventing suicide. You can suffer infinitely, as long as you're not likely to kill yourself; otherwise they are working towards tracking and finding you. That's a seriously disappointing use for this technology: they just can't help themselves but put new technology to its worst, most dystopian possible use. There wasn't a single hint of empathy or sympathy in the article, but plenty of excitement for new ways to discriminate against suicidal people as they engage in the terrible "Suicidal Thoughts and Behaviours". So it's inevitable that I would be a misanthropist; too many people act like this.
 
  • Aww..
Reactions: vampire2002
-Link-

Deep Breaths
Aug 25, 2018
588
On being the non-consensual subjects of studies, this forum does offer a unique dataset that likely couldn't be replicated in any other way given the taboo nature of suicidality and the reporting requirements when the person is identifiable.

Scientifically or otherwise, I'd expect this forum to be one of the most monitored online communities on the entire open web and that if the whole truth of it was known, probably barely anybody would ever post here.

This is just a gentle reminder that we are being watched, always.

I certainly understand the world's interest in us, but I feel like we're constantly being branded as lab rats or aliens or subhuman, death-mongering monsters and that this somehow makes it OK to poke, prod, belittle, and bully us.

I'm not sure I've ever seen any outsider try to simply understand us as human beings and talk about the wider societal factors that would lead to this site's existence in the first place and lead so many people to seek it out and engage with it.

It's so profoundly disappointing that this short-sightedness continues to be the way.
 
  • Like
  • Informative
  • Love
Reactions: compulsoryaliveness, vampire2002, needthebus and 3 others
CTB Dream

Injury damage disabl hard talk no argu make fun et
Sep 17, 2022
2,613
this ape speces prtnd lrn prtnd knw this all lie, ape speces have adv tch no use nly make hard ctb, awfl scum prolif speces, see ai do many cncpt no any do nly do rsrch sucdl, rly awfl ape
 
nir

27/F/Canada
Aug 18, 2024
300
well, we can all do our part by just commenting incorrect shit to throw off the AI

I can't wait to commit suicide by oxygen inhalation! I can't wait to die from drinking 2L of water daily! I can't wait to kill myself from 8 hours of uninterrupted sleep each night!
 
  • Yay!
Reactions: GlassMoon, -Link-, LostLily and 1 other person
needthebus

Longing to Becoming HRU
Apr 29, 2024
236
the study seemed fairly ethical to me

they were using language terms to try to guess who actually died

then they were looking at interactions between users and threads to see who was more at risk

some of their conclusions were things like: if you interact with a few random people who are not linked in a social group (i.e. your contacts have fewer connections), your risk is higher. some of their conclusions I didn't understand. The study author is smarter than I am, so I couldn't understand it all, or perhaps I could if I learned more.

this didn't seem to me to be about trying to figure out who was posting, or even about stopping people. it seemed to be more about identifying social media data that indicates who is at high risk

some people are high risk and want help. it's really tough to study the highly suicidal due to the mental health industry's unethical rules and conduct that result in people afraid of care.

in a normal setting (facebook) how do you study who wants to die?

the article is not as bad as it seems, and this isn't about stopping us if we want to but more about just studying suicide as a phenomenon.

they also did things to anonymize the datasets even while studying them. this study seems respectful of those involved and of the site from what I read, although any publicity of this site is bad
 
Last edited:
  • Informative
  • Like
Reactions: -Link- and charaunderground
bosschop

New Member
Mar 11, 2023
2
the study seemed fairly ethical to me

they were using language terms to try to guess who actually died

then they were looking at interactions between users and threads to see who was more at risk

some of their conclusions were things like: if you interact with a few random people who are not linked in a social group (i.e. your contacts have fewer connections), your risk is higher. some of their conclusions I didn't understand. The study author is smarter than I am, so I couldn't understand it all, or perhaps I could if I learned more.

this didn't seem to me to be about trying to figure out who was posting, or even about stopping people. it seemed to be more about identifying social media data that indicates who is at high risk

some people are high risk and want help. it's really tough to study the highly suicidal due to the mental health industry's unethical rules and conduct that result in people afraid of care.

in a normal setting (facebook) how do you study who wants to die?

the article is not as bad as it seems, and this isn't about stopping us if we want to but more about just studying suicide as a phenomenon.

they also did things to anonymize the datasets even while studying them. this study seems respectful of those involved and of the site from what I read, although any publicity of this site is bad
Nice insight
 
Emeralds

Student
Aug 29, 2024
143
There have been researchers lurking on here for a long time. It's no surprise. This forum is a goldmine of information. People say things on here that they wouldn't normally say. It's like stepping into someone's head and watching their thought processes.

People on here take it for granted that everyone understands what it's like, but most people really don't. It's a mystery to them why someone would want to die or why someone keeps saying that for years and never does it.

People can learn a lot just by observing the forum. I don't see a problem with this. They anonymize the data. They aren't stopping anyone. The information can be used to help people in the future once there is a better understanding about suicidal ideation and how it affects people in their daily life.
 
Douggy82

Member
Nov 4, 2024
38
What's the end goal though?

To improve their torture programs. The study in OP's post is a legitimate public study. The military doesn't publish their studies and their technology is much more advanced. They track SS posts and match them up with real-world identities. From there, they can judge the effectiveness of their torture techniques and life destruction programs.

They can refine them and make them more effective. It's like digital waterboarding. They gain a greater understanding of what kills people, what levels of pain people can tolerate before death occurs, etc. These people are psychopaths. They like to keep people alive and in pain and see what levels of pain they can tolerate.

We're lab rats. What happens when you inject this rat with 50ccs of this drug? What happens when you take all this lab rat's money and give him a terrible disease? They're experimenting on us and collecting detailed results with the goal of making their torture programs more effective.
 
  • Like
  • Informative
Reactions: needthebus, Forever Sleep and ijustwishtodie
d3m1g0d

I seek undetectable, low risk methods
Jun 27, 2023
9
AI can suck my asshole
I digress here, but AI is, after all, just a tool that either we or those bastards can use for better or worse. This is just like insulting a gun because it kills people.
 
  • Like
Reactions: Douggy82
KuriGohan&Kamehameha

想死不能 - 想活不能
Nov 23, 2020
1,738
What's the end goal though? To identify people and stop them attempting? That's where I thought it was going initially. To use AI to identify members and send police around when they post a goodbye thread. That would be a massive invasion of privacy. I wonder if that's even legal. I expect it will become possible. Maybe it already is.

Doesn't surprise me at all that they would test out the technology (without permission) prior though.

I don't think I'd risk a 'live' goodbye thread regardless to be honest. Just to be on the safe side. Just to cut out any risk of a human or non human interfering.

It makes me wonder where it will go though in the future. Will that happen? Will people's state of mind be assessed by AI via their online presence and intervention be forced on them? Sanction Suicide via carrier pigeon or pen pals. What do we think guys?

Just makes me so glad I'll be dead before technology gets better and more devious and that I've brought no children here to try and navigate through it. I guess future generations are going to breed the very best in espionage. To be able to dodge all the ways we'll be monitored in future, they'll have to be super spies.
At the end of the paper you can read their intended aim/goal of the research:

"Pairing these network-based features with other proven digital markers of STB risk may improve data-driven suicide prevention efforts."

All of these AI studies are nearly identical, they just produce models and want to use the models they generate for surveillance and prediction purposes, since the methods available now for trying to guess if a person is suicidal or not are woefully ineffective, as you never know if someone is lying or telling the truth if you hand them a subjective questionnaire.

As usual, it's disappointing. They can thwart attempts by incorporating AI scraper tools into social media and messaging software to harvest and analyse text data, but if you don't address the underlying causes making someone suicidal in the first place, you are not helping or curing anyone. Short sighted thinking from people who are maths/programming geniuses capable of understanding and creating complex AI models. It's cheaper to do these AI studies and publish them though than it is to do experiments and interviews with real human beings, as that requires further ethical approval and consent :'^))
 
  • Like
Reactions: -Link-, katagiri83, Forever Sleep and 3 others
needthebus

Longing to Becoming HRU
Apr 29, 2024
236
At the end of the paper you can read their intended aim/goal of the research:

"Pairing these network-based features with other proven digital markers of STB risk may improve data-driven suicide prevention efforts."

All of these AI studies are nearly identical, they just produce models and want to use the models they generate for surveillance and prediction purposes, since the methods available now for trying to guess if a person is suicidal or not are woefully ineffective, as you never know if someone is lying or telling the truth if you hand them a subjective questionnaire.

As usual, it's disappointing. They can thwart attempts by incorporating AI scraper tools into social media and messaging software to harvest and analyse text data, but if you don't address the underlying causes making someone suicidal in the first place, you are not helping or curing anyone. Short sighted thinking from people who are maths/programming geniuses capable of understanding and creating complex AI models. It's cheaper to do these AI studies and publish them though than it is to do experiments and interviews with real human beings, as that requires further ethical approval and consent :'^))
it's complicated because preventing impulsive suicides of people who haven't come to a decision seems like a desirable outcome. some people actually do want help and are too stupid or meek to ask, and actually die because of it.

the bad thing about it is that it contributes towards more government surveillance of emotional health, and the government is always pro-life because the worker bee slaves aren't allowed to die without the permission of the government and large corporations.

Perhaps you are right. Fuck this study, there is no way to ethically study suicide when it will be used by a malevolent government.
Bosschop sorry for telling you to fuck off if you are a pure researcher and not a clinician, or perhaps you are neither. The word insight is nails on a chalkboard to me, and results in Pavlovian nausea, constipation, shaking, and twitching whenever I hear it, even without more brain-weight-reducing hell-meds

Perhaps one day I'll be a high risk user (HRU).

Has needthebus finally got on that great big short bus headed into the sky or down to hell or wherever buses go?

No, needthebus hasn't gotten on the bus. They've merely become a HRU (high risk user). Perhaps they are safe and sipping a pina colada on a beach and changed their mind after posting their goodbye.

I'm probably too much of a coward to really become a HRU. One day.
 
Last edited:
  • Like
Reactions: Douggy82 and KuriGohan&Kamehameha
