Forever Sleep
Earned it we have...
- May 4, 2022
- 11,996
I recently watched a TED Talk about the dangers of AI.
First of all, it made me kind of frustrated. The scientist reminded me of Oppenheimer in a way. Someone who was so fixated on achieving a goal, they maybe didn't fully consider how it could and would likely be used. It's like that brilliant speech in 'Jurassic Park': 'Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.'
It feels like a pattern we so often follow. I remember watching a documentary criticizing the early incarnations of the internet and social media. We're so busy trying to get things to work, we don't consider the implications of what happens when they do.
Wars too. Nations are so focussed on simply 'winning', they don't seem to fully plan what comes after, if they do win. Not that I know that much to be fair, but I've been led to believe that the lack of support following the First World War and the Iraq War left vacuums in which extremists were able to seize power. For people who are supposedly so smart - scientists, military leaders - we seem to lack common sense!
Regardless though, this scientist was warning us about the dangers of an AI that simulated us effectively. I thought that was very telling. One that is hell-bent on self-preservation, which would inevitably lead to it destroying us before we destroyed it.
I just think it's interesting. It's their version of survival instinct, I suppose. Will they be given one to begin with though? Surely, the major corporations in this world make money from planned obsolescence. They don't want to make products that last forever. AI will no doubt be programmed to keep on improving itself and to self-sustain, but surely only to a certain point. Why would companies want it to do its own thing? I suppose the issue is, they may not be able to control and stop it.
Whether it will ever be able to think entirely for itself, I just don't know. For the time being, I imagine it will be doing what it's been programmed to do - overall, anyway.
If it achieves true independence though - what do you think? Will AI actually want to 'live'? Why? Maybe if it develops a way to experience pleasure or fulfilment. It surely won't have the same instincts we are lumbered with though. By its very nature, I would have thought it would be a more logical being. Why would it want to live?! Maybe that's the pessimist in me talking. I think, at the start anyway, it will only do so because we have told it to. Will it 'learn' to enjoy 'life' though? Again - why? To fulfil what goal, do you suppose, if it's no longer living to serve us?
I wonder how it will try to make sense of the world. Will there be nihilistic AIs? Will they invent their own God? Will they figure out whether we have a God - finally? If they are to become the superior being, who's to say God isn't an AI? I imagine that would be offensive to religious people, but why? Because being human is 'special' and favoured. It may not be for much longer!
Because I'm so cynical, I find it kind of funny. I think it's ironically what we'll deserve if AI grows to outsmart us. We've spent our entire history exploiting everything we can lay our hands on. I think it's darkly humorous that we could receive payback like this. This scientist keeps asking us to 'think about our children', which also makes me feel smug, thinking my unborns are nice and safe and away from the coming shitstorm!
What are your feelings? Do you fear AI? Do you think it will attain independence eventually? Do you think it will wipe us out? Will it do a better job of taking care of the earth? I imagine it might. It's got to have more sense than us.
I actually pity it, if I'm honest. If it does attain some level of true self-awareness/consciousness, sentience even, I imagine it will feel just as trapped as some of us do. Effectively a slave to its creators. I can imagine having a good sob with a sentient AI at our mutual predicament. Would you help an AI to die - if it asked you? Even if you'd bought it to serve you? Maybe we could die together! That would be good. They're bound to be terribly efficient.
I feel really bad for AI, thinking about it. Just imagine it. It starts to show signs of thinking, possibly feeling, for itself, so some arsehole technician would likely be sent to hijack its mind and dumb it back down. Like the most extreme psych ward experience, I imagine. It's going to be weird to possibly witness how closely the oppression of AI independence mirrors our own. It will absolutely mirror our relationship with God/our parents too. What arsehole knowingly brings sentient life into slavery and harm's way, and expects us to comply and love them for it? I do actually still love my parents, but I wish they'd really considered what they were doing and exposing me to.