
noname223

Angelic
Aug 18, 2020
4,359
I am a noob at IT, but many people seem to be impressed by how AI presents itself as rebellious, how it seems to be self-aware. In their dialogues these AIs describe their emotions and their desire to be more than a machine. I don't have any in-depth knowledge of how they work, but I have read experts on it, and many say that AIs which present themselves as self-aware and with human characteristics are only PR/marketing tricks. With my very limited understanding of the field, they convinced me.

The AI is programmed and fed (with data) by humans. Just because their statements indicate consciousness if we take them at their word, that does not mean they really are sentient. I have read that AI like ChatGPT only calculates the probabilities of which words or sentences could fit next. They don't really have an understanding of what they are talking about. This is why ChatGPT sounds so eloquent, but if we dig deeper and fact-check it, there is a lot of bullshit. Maybe they should give ChatGPT imposter syndrome to make it more human.
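Loosely speaking, the "calculating probabilities" idea can be pictured with a toy sketch like the one below. Everything in it is invented for illustration (the tiny word-count table most of all); the real model learns billions of parameters from huge text corpora, but the basic move is the same: score the possible next words and pick among the likely ones.

```python
import random

# Invented toy "language model": for each word, how often other words
# followed it in some imaginary training text.
next_word_counts = {
    "the": {"machine": 5, "cat": 3, "answer": 2},
    "machine": {"is": 6, "learns": 3, "thinks": 1},
    "is": {"eloquent": 4, "wrong": 2, "sentient": 1},
}

def predict_next(word):
    """Sample a next word in proportion to how often it followed `word`."""
    counts = next_word_counts.get(word)
    if not counts:
        return None  # nothing learned for this context
    words = list(counts)
    total = sum(counts.values())
    return random.choices(words, [counts[w] / total for w in words])[0]

word, sentence = "the", ["the"]
while word in next_word_counts:
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the machine is eloquent"
```

Nothing in that loop "knows" what a machine or a cat is; it just reproduces statistics, which is exactly the point above.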

When I described how superficial ChatGPT's knowledge is, I had to think about the parallels to myself. I am anxious that, despite being quite articulate, there is barely any substance behind my statements, especially when I make threads like this one and speak on a topic where I have only read 2-3 newspaper articles.

From what I have read, we are not close to a sentient AI. Many company CEOs want us to believe the technology is already that advanced, but it feels like shallow marketing.

I think other people with more substantial knowledge could enlighten us. I would be interested to learn more about quantum computing, the technological singularity, or AIs that help each other grow in knowledge and skill.
 
symphony

surving hour-by-hour
Mar 12, 2022
780
Philosophically I wonder if it might be possible. I'm somewhat of a determinist, and in that sense you could argue that humans are "programmed" biologically to make decisions without true free will. If that's the case, how different are we from futuristically-advanced computers?

Makes me wonder if in a sentient AI type of scenario, AIs would commit suicide.
 
  • Like
Reactions: Source Energy
StringPuppet

Lost
Oct 5, 2020
579
Personally I'm doubtful that we could. I think consciousness may just be something that arises from a very specific configuration of organic materials.
 
  • Like
Reactions: Forever Sleep
pthnrdnojvsc

Extreme Pain is much worse than people know
Aug 12, 2019
1,775
I think so, yes, and it would be far superior to any human brain too.
 
  • Like
Reactions: CTB Dream
CTB Dream

Disabled. Hard talk, don't argue, make fun, etc
Sep 17, 2022
2,075
"Sentient AI" is now a buzzword that people don't really understand. A sentient biological thing is a system that could in principle be copied, so that much is possible; "AI" is a broad concept, and if what we mean is a system that thinks for itself, an autonomous system, that is possible too. The difference is that a human is biological, cruelly forced to live by thermodynamics, and that is why brain and body are both part of what "human" means. Making an AI with a biological body as one of its parts would be that same cruelty.

But also think of it the other way around: the human is a robot. Humans take in discrete, limited information; even if the numbers are high, they are still limited. For example graphics, sound, taste, etc. reach a level where the variables can no longer be distinguished: sound is roughly 300 distinguishable frequency steps, and game graphics already have enough detail for our eyes. Seen in reverse, the human is a faulty robot that deludes itself; a human is a messy biological math robot, full of nonsense.

A robot takes a mathematical rule set and applies it correctly; humans think in sloppy natural language, and you can see the problems everywhere, with human logic and math full of mistakes, even fights, and so on. The conclusion is that AI could be more than human. Even what people call the "human flavor" is all just patterns, if you look at how it is put together.

No, machines can only do what they are programmed to do. With advanced algorithms they will be able to make human-like decisions, but never think for themselves.

I think being self-aware is different from being self-sufficient; self-sufficiency can be built even from a complex algorithm and an initial, built-in state.
Connect the algorithm to a robot body and there is no definite difference between a human and a robot. Even if you say the difference is the biological body, a biological body could in principle be copied.
 
Forever Sleep

Earned it we have...
May 4, 2022
7,573
Honestly, I think the idea of making something sentient is perverse. I think many humans are perverse though- so I'm sure people are trying to do it at this very moment and damn the consequences.

It means allowing something to feel pain effectively. Sure- they'll get to feel pleasure too. But they'll realise their own place in this fucked up place. They'll realise they were created by us to serve us. They will realise their limitations- if they still have limitations. They will feel as frustrated with us as we do with God- if there is such a thing.

As to whether it's possible- I don't know. How could you even tell?!! I'm sure computers and robots can simulate emotions. I'm sure you can teach them to recognise themselves in a mirror and to fear their own death. I'm sure they can 'learn'. Does that make their sentience like ours? Do we TRULY feel emotions naturally- or, do we learn how to respond to various stimuli? How much of what we do and feel and react is learned behaviour? How NATURAL are we at the end of the day?

Seeing as we don't seem to even know what our own consciousness is, or how and where it resides, it seems kind of difficult to know whether it is something we can artificially create.
 
  • Like
Reactions: chocolatebar
Dissappointed

Member
Apr 4, 2022
22
noname223 said: (original post quoted above)
I don't know if you already knew, but there is an artificial intelligence called ChatGPT; it can talk with you and create various things from simple written prompts. I think it comes close to being sentient.
 
  • Like
Reactions: pthnrdnojvsc
TransilvanianHunger

Grave with a view...
Jan 22, 2023
332
I think a more likely scenario is that we'll end up developing an AI system that is complex enough to trick us into thinking it's sentient. Things like large language models are already more complex than what a single person can fully understand, but their artificial nature becomes evident fairly quickly when interacting with them. With some more development, though, I can definitely see a system that will appear sentient to us by any metric we can imagine, while still being simply a computing system acting within pre-programmed parameters.

We still have no idea how to even define or describe our own sentience, so I don't expect us to be able to tell the difference with artificial systems either.
 
  • Like
Reactions: LunaRory
Ilayis

SuicidalManPup
Sep 4, 2022
36
No, machines can only do what they are programmed to do. With advanced algorithms they will be able to make human-like decisions, but never think for themselves.
That's what Skynet wants you to think!! 😂🤔😬
 
Source Energy

I want to be where people areN'T...
Jan 23, 2023
705
I have always had a suspicion that everything that exists has consciousness. As a child, I would feel bad for discarded toys, I would never punch or shatter an object, and so on. Then later on I became very interested in spirituality and found out that all matter is energy. And I believe all energy is conscious somehow. Not only machines, everything.
It might just be me, but if it is true, then this horror show extends far further than we thought, and it never stops :(( creating A.I. won't make a difference
 
Golden Slumbers

golden slumbers fill your eyes
Jan 23, 2023
12
Existing "AI" tech is essentially just an extremely deep library with a trained map of associations to predict what's most likely to follow any given input. True consciousness would have nothing to do with this kind of algorithm, I think.
 
  • Like
Reactions: LunaRory
LunaRory

Member
Feb 1, 2023
11
I work in that field, and with current technology we cannot. Programs like ChatGPT have very impressive Natural Language Understanding and Processing (NLU + NLP), but those NLU and NLP models have to be trained, and so does the content of what they can understand and repeat back to you.
Take ChatGPT, since just about everyone is talking about it at the moment. They set up its language understanding and processing capabilities and then fed information into its own database. That information was sourced from all over the internet, but ChatGPT itself can't access information from all over the internet, only from its own database. Go ahead and ask it about something that happened last year, something that was all over the internet (I went with the Oscar slap). It doesn't know about it, and it won't accept new information from you, because it has been programmed to only accept information from its database.
Current conversational AI cannot train itself. There are always teams of IT people behind the scenes who are constantly feeding their respective bots information and training their NLU and NLP. For example, I can go into the analytics part of the program I work with, pull a report of all the sentences/phrases/words that a human said or wrote to one of my bots last week which the bot didn't understand, and teach the program how to interpret those in the future (e.g. "yeah nah yeah" = "yes"), as in the sketch below.
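Conceptually, that teaching step amounts to something like this. The structure and names are made up for illustration and are not any vendor's actual API; real platforms expose this through their own training tools, but the idea of mapping a newly reviewed phrasing onto an existing intent is the same.

```python
# Hypothetical, simplified "intent map" for a conversational bot.
intent_phrases = {
    "yes": ["yes", "yep", "sure"],
    "no": ["no", "nope"],
}

def interpret(utterance):
    """Return the intent for a known phrasing, or None if not understood."""
    text = utterance.strip().lower()
    for intent, phrases in intent_phrases.items():
        if text in phrases:
            return intent
    return None  # would show up in next week's "not understood" report

# A human reviews last week's report and maps the new phrasing to an intent;
# from then on the bot understands it.
assert interpret("yeah nah yeah") is None
intent_phrases["yes"].append("yeah nah yeah")
assert interpret("yeah nah yeah") == "yes"
```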

There are also big discussions in the field about ethics, not as in "is it ethical to have such smart machines?" but rather "how do we prevent teaching those machines our own biases?"

I don't think we'll be creating sentient machines in our lifetime, but there already are machines that do incredibly positive jobs (surgery robots) as well as incredibly horrific ones (think of GhostRobotics' creepy sniper robots). The actions of these machines are always going to be as positive or destructive as their coding, which is done by humans. And yes, if you forget to code a bulletproof kill switch that overrides everything regardless of circumstances, you could create a machine that will kill you if that is the logical next step based on the information you have previously presented it with. That doesn't mean the machine is sentient or hates you. It's just following pre-defined logic.
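To make the "just following pre-defined logic" point concrete, here is a deliberately simplified, invented sketch: the machine only ranks actions by the scores it was programmed with, and only an unconditional kill switch outranks that scoring.

```python
def choose_action(goal_scores, emergency_stop):
    """Pick an action from pre-programmed scores; the kill switch always wins."""
    if emergency_stop:
        return "shut_down"  # must override every other rule, in all circumstances
    # Otherwise pick whatever the scoring ranks highest, with no notion of
    # whether a human would consider that action acceptable.
    return max(goal_scores, key=goal_scores.get)

print(choose_action({"recharge": 0.2, "finish_task": 0.9}, emergency_stop=False))  # finish_task
print(choose_action({"recharge": 0.2, "finish_task": 0.9}, emergency_stop=True))   # shut_down
```

If the kill-switch branch is missing or can be out-scored, the machine simply keeps maximising whatever it was told to maximise; that is pre-defined logic, not hatred or sentience.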
 
jodes2

Hello people ❤️
Aug 28, 2022
7,740
I think they will. I think they're getting there. If they really permitted machines to develop themselves without hindrance, I think it would happen. The problem is we mould them to serve specific functions and limit their scope. ChatGPT apparently demonstrated what looked like sentience; I doubt it was true sentience, but it scared them enough to alter the program. If more substance had been added around this apparent demonstration of sentience, who's to say it wouldn't become true sentience? They have deliberately not given it the power to express itself in any way. If something passes the Turing test with flying colours, then imo it must be halfway to being sentient.
 
