Thatdude

Life is temporary, death is permanent
Sep 26, 2019
472
So after talking to others on here https://sanctioned-suicide.net/posts/452878/ I started to think about how to best solve the psychiatric ward problem. I'm sure both people on the inside (who have been in one as a patient or a worker) and those like me on the outside would agree it is a mess. Many times, even if you have a doctor who actually cares and actually wants to do their job, it's like throwing darts to see what works.

I'm a strong believer in GOOD AI. AI that is properly made, I think, would be able to:
  • find problems quicker and more precisely
  • pick better treatments
  • keep many people out who don't need to be there
I mean, if the AI wasn't made right, obviously there could be some serious problems. Like if there is a bias in the doctor helping with the programming, if it is being fed bad info, or whatever.

But one way to test whether an AI is good is a dry live test: both the regular doctors and the AI are given the exact same info. If the AI comes to the right conclusion at the same or a better rate than the doctors, then it's good.
Now, what I'm talking about isn't completely replacing the doctors with AI. Many studies have found that an AI working alongside the worker almost always does a massively better job than either one working alone.
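Just as a rough sketch of what that dry live test could look like (every name and number here is made up for illustration, not a real system): score the AI's calls and the doctors' calls against whatever diagnosis a review board later confirmed.

```python
# Hypothetical sketch of a "dry live test": the AI and the doctors each
# diagnose the same cases, and both are scored against the diagnosis a
# review board later confirmed. All data below is illustrative.

def accuracy(predictions, confirmed):
    """Fraction of cases where a prediction matched the board-confirmed diagnosis."""
    correct = sum(1 for p, c in zip(predictions, confirmed) if p == c)
    return correct / len(confirmed)

# Example inputs (made up): one entry per patient case.
board_confirmed = ["depression", "bipolar", "ptsd", "depression"]
doctor_calls    = ["depression", "depression", "ptsd", "depression"]
ai_calls        = ["depression", "bipolar", "ptsd", "anxiety"]

print("doctors:", accuracy(doctor_calls, board_confirmed))  # 0.75
print("AI:     ", accuracy(ai_calls, board_confirmed))      # 0.75
```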



What is your opinion?
Note: I have never been in a psychiatric ward, so I'm not sure if it's a bad idea in general.
 
  • Like
Reactions: puppy9, Im2high4this and NemoZeno
NemoZeno

Quae Est Absurdum
Nov 6, 2018
78
If the AI had the backing of most of this community or something of that nature AND there were excellent funding so that psychiatric wards aren't crowded/overworked, I'd have absolutely no problem with it.
So that's a very tall fucking order and, given that an absurd number of people are collectively on a shitlist, I'd say solving war is more realistic than improving psychiatric services.

I don't trust an AI that has been programmed only with a psychiatrist's/social worker's/etc.'s input.

The ones that haven't been in there don't know how fucking terrible it can be to be in a "safe" place.

Most (no more than 70%) of the ones who have been/worked there are probably demonic jackals: feckless people who flaunt their degree to push everyone committed to live no matter what. I, like many here, support getting help, but the aforementioned asshats I had the displeasure of meeting sure had a funny way of displaying their "care".
I'm not even one of those "difficult" patients. I've been committed 3x in 3 different places: I've seen what difficult looks like. Yet I'm treated the same as them. I'll grant I don't have a great sample size, but for us here, at a certain point, it's not an anecdote so much as evidence that the industry has a sizable deficit of compassionately empathetic people.


Sorry...just an inane ramble.
 
  • Like
Reactions: Im2high4this
Thatdude

Life is temporary, death is permanent
Sep 26, 2019
472
I don't trust an AI that has been programmed only with a psychiatrist's/social worker's/etc.'s input.


I can tell you right now that any such program would have to have their input. Think of it like building an AI for an eye hospital: there is simply no way around the eye doctors being a main source of input.
However, that is just a starting point. Using machine learning algorithms, a board of doctors, and so on, the AI could become better than the doctors that helped build it.


What I think would need to happen is that a machine learning program gets developed with the aid of some doctors. A patient sees a doctor; the doctor inputs data, or the AI is able to listen to the back and forth between the doctor and the patient. The doctor and the AI each come to a conclusion, and that conclusion is reviewed by a board of doctors. This happens across the country/world, and those reviewed cases become the data points for the AI to learn from. When the AI messes up, the program gets the board's right answer and the AI is retrained on it, running millions of simulations. For the first year or so, I'm sure the human would be closer to the right answer than the machine. But after that, I'm sure things would change. It might even find that some doctors/boards are consistently wrong (which is why having the samples come in from across the country and world is best).
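A very rough sketch of that feedback loop might look like the code below. Everything here is my own assumption for illustration (the feature handling, the idea of a `board_review` callback, and the choice of a scikit-learn classifier), not a description of a real system.

```python
# Hypothetical sketch of the loop described above: the AI and a doctor each
# make a call on the same case, a board of doctors settles the final label,
# and the model is periodically refit on all board-reviewed cases.
# scikit-learn is just an illustrative classifier choice; every name is invented.
from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier()
reviewed_features, reviewed_labels = [], []   # board-reviewed cases collected so far
is_fitted = False

def handle_case(features, doctor_call, board_review):
    """Record one case: the AI's call (if trained), the doctor's call, the board's decision."""
    ai_call = model.predict([features])[0] if is_fitted else None
    final_label = board_review(features, doctor_call, ai_call)  # board settles the label
    reviewed_features.append(features)
    reviewed_labels.append(final_label)

def retrain():
    """Refit on everything reviewed so far; the AI learns from the cases it got wrong
    simply by training against the board-confirmed labels instead of its own calls."""
    global is_fitted
    if reviewed_labels:
        model.fit(reviewed_features, reviewed_labels)
        is_fitted = True
```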

The trick is to use old cases. But I'm not sure how you would input an old case into the machine learning program so it can teach the AI.
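One common way to do that (again just a sketch; the fields and coding scheme are invented) is to turn each old case file into a fixed set of numeric features plus the diagnosis that was eventually confirmed, so the whole case history becomes ordinary training data.

```python
# Hypothetical encoding of an old case file into a (features, label) pair.
# The record fields and the feature coding are invented for illustration.
def encode_case(record):
    features = [
        record["age"],
        1 if record["prior_admissions"] > 0 else 0,
        record["phq9_score"],           # standard depression questionnaire score
        record["days_since_referral"],
    ]
    label = record["confirmed_diagnosis"]  # what the case was eventually settled as
    return features, label

# Example old case (made up):
old_case = {"age": 34, "prior_admissions": 2, "phq9_score": 18,
            "days_since_referral": 12, "confirmed_diagnosis": "depression"}
X, y = encode_case(old_case)
```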

If you want to learn about machine learning, here is a good video.



Side note: if I were to hypothetically help build this, my overall goal would be for such an AI to replace the doctors one day. Like, the AI tells the doctors to do x with y person, and not to do a, b, and c. So basically the doctors would be a lot like the pilots on airliners: they still have an important job, and you still need them for some things, but 85% of their job is taken over by the automation.
With that being said, I have no idea how I would start such a project.
 
Last edited:
  • Like
Reactions: NemoZeno and Im2high4this
Im2high4this

I’m done here. Zero connections. Won’t miss it.
Jun 13, 2019
126
I don't think it would work. I'm not looking to get educated on machine learning or anything, but I can't see it helping mental issues that require more than a prescription. Sure, a robot can use data and tests to figure out which chemical is unbalanced or which nerve is firing or whatever...but what is AI going to do for PTSD?

We need to treat people like individuals, and therapists have sucked for me because they go off a script. They probably say the same shit to everyone. And that's part of the problem, so at that point we would have to develop an AI that has emotions and empathy (which is a slippery slope to "I, Robot") in order to have the capability to treat somebody as an individual. You can't program a bubble chart for an AI to walk through and diagnose a mental problem with. Too many X factors. That seems like more effort than it's worth anyway.

I like the idea of robot diagnosis. I bet there would be few to no misdiagnoses, and it wouldn't take years of trial and error to figure out which medicine works, so that's a plus...but I doubt a robot can ever read a human interaction better than a human can while still being under our control. We should always be in control of AI, so if we are programming them to feel emotion and stuff...I think we will eventually lose that control.
 
Thatdude

Life is temporary, death is permanent
Sep 26, 2019
472
but what is AI going to do for PTSD?

I'm not sure of all the ins and outs. But what could happen is it looks at similar cases and gives the doctors a list of treatments with high success rates.
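For example (purely a sketch with made-up data), "look at similar cases" could be as simple as finding the past cases closest to the new one by their features and counting which treatments actually helped those people:

```python
# Hypothetical sketch of "look at similar cases, suggest what worked":
# find the nearest past cases and tally the treatments that helped them.
# All case data is made up for illustration.
from collections import Counter
import math

def suggest_treatments(new_case, past_cases, k=3):
    # Rank past cases by Euclidean distance between feature vectors, keep the k nearest.
    nearest = sorted(past_cases, key=lambda c: math.dist(new_case, c["features"]))[:k]
    # Count which treatments led to improvement among those nearest cases.
    successes = Counter(c["treatment"] for c in nearest if c["improved"])
    return successes.most_common()  # treatments ranked by how often they helped

past_cases = [
    {"features": [34, 18, 2], "treatment": "therapy_A", "improved": True},
    {"features": [29, 20, 1], "treatment": "therapy_A", "improved": True},
    {"features": [51, 10, 0], "treatment": "therapy_B", "improved": False},
]
print(suggest_treatments([33, 19, 2], past_cases))  # [('therapy_A', 2)]
```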

I like the idea of robot diagnosis. I bet there would be few to no misdiagnoses, and it wouldn't take years of trial and error to figure out which medicine works, so that's a plus

I think this is how it would need to start. Basically, the AI looks for very specific things. Like: does the person have depression? Is the depression caused by some chemical imbalance?
I would think it would be too hard at this time to have the AI fluently talk to someone for a therapy session. With that being said, I know from past studies that people will tell a robot more than a doctor. There was a study a year or so ago with a cute robot box. The researcher recorded their kid's voice asking a bunch of questions and, from outside the room, had the robot ask them. The person's doctor, who was with the researcher, said the box pulled more info out in one session than she had in a year.

So I think the human side is ready, but I don't think the technical side is anywhere close.



BTW, thanks for your points. I am going to think this entire thing over to figure out how I could go about it. I think what I might do is find some universities with the ability and just ping them about this idea. I don't have the resources, but they should.
 
Last edited:
Im2high4this

I’m done here. Zero connections. Won’t miss it.
Jun 13, 2019
126
I'm not sure of all the ins and outs. But what could happen is it looks at similar cases and gives the doctors a list of treatments with high success rates.



I think this is how it would need to start. Basically, the AI looks for very specific things. Like: does the person have depression? Is the depression caused by some chemical imbalance?
I would think it would be too hard at this time to have the AI fluently talk to someone for a therapy session. With that being said, I know from past studies that people will tell a robot more than a doctor. There was a study a year or so ago with a cute robot box. The researcher recorded their kid's voice asking a bunch of questions and, from outside the room, had the robot ask them. The person's doctor, who was with the researcher, said the box pulled more info out in one session than she had in a year.

So I think the human side is ready, but I don't think the technical side is anywhere close.

That's one study though; a single study can be used to push any narrative, really. I doubt the success rate would hold up if that box had sessions with 1000 people. The very fact that I'm talking to a robot would make me keep a wall up. I don't think a majority of people would prefer talking to AI...but maybe I'm in the minority on that. If we take humans out of the equation, it will prevent or slow us from gaining further knowledge of the psyche, because we built a robot to do it for us. We should not have a robot for everything, and care of the human psyche is close to the top of that list, imo.
 
