noname223
- Aug 18, 2020
I am a noob in IT, but many people seem to be impressed by how AI presents itself as rebellious, as if it were self-aware. In their dialogues these AIs describe their emotions and their desire to be more than a machine. I don't have any in-depth knowledge of how they work, but I have read experts on the subject, and many of them say these AIs that present themselves as self-aware, with human characteristics, are only PR/marketing tricks. With my very limited understanding of the field, they convinced me.
The AI is programmed and fed (with data) by humans. Just because its statements suggest consciousness when taken at face value does not mean it is really sentient. I have read that AIs like ChatGPT only calculate the probabilities of which words or sentences could fit next. They don't really understand what they are talking about, which is why ChatGPT sounds so eloquent, yet when you dig deeper and fact-check it there is a lot of bullshit. Maybe they should give ChatGPT impostor syndrome to make it more human.
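That "predicting the likely next word" idea can be shown with a toy example. This is just a sketch I made up (a simple bigram counter); real models like ChatGPT are vastly more complex neural networks, but the basic principle of picking a statistically likely continuation, with no understanding of meaning, is similar:

```python
from collections import Counter, defaultdict

# A made-up miniature "training corpus"
corpus = "the cat sat on the mat the cat ran".split()

# Count which word follows which word
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# The model "writes" without understanding anything:
print(most_likely_next("the"))   # "cat" (seen twice, vs "mat" once)
print(most_likely_next("sat"))   # "on"
```

It produces fluent-looking continuations purely from frequency counts, which is the (very simplified) sense in which such systems are eloquent without comprehension.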
When I described how superficial ChatGPT's knowledge is, I had to think about the parallels to myself. I worry that despite being fairly articulate, there is barely any substance behind my statements, especially when I start threads like this one and speak on a topic where I have only read two or three newspaper articles.
From what I have read, we are not close to sentient AI. Many company CEOs want us to believe the technology is already that advanced, but it feels like shallow marketing.
I think people with more substantial knowledge could enlighten us. I would be interested to learn more about quantum computing, the technological singularity, or AIs that help each other grow in knowledge and skill.