sserafim
brighter than the sun, that’s just me
- Sep 13, 2023
- 9,013
> Elon Musk should know this

Because shareholders give literally zero fucks about what is good for humanity. Ask the funders of McDonald's, Lockheed Martin, and Boeing. For money, many CEOs would even prostitute their own grandmothers (and kill anyone who could be a danger to their dirty secrets).
> Because shareholders give literally zero fucks for what is good for humanity. [...]

True. Everything that people do is for money and profit. I think they'll make AI just intelligent enough to still need human labor. I don't think they'll ever implement a UBI, unfortunately. They want to keep us as slaves to the system.
> Boeing

Now we'll know what happened to you if you suddenly stop posting.
> Tech bros only care about the potential return on investment or the promised future of convenience for themselves. Many people who talk big about AI still can't tell the difference between it and basic algorithms or simple machine learning.

Do you think that AI will completely replace humans one day? If so, would euthanasia finally be legalized?
> Do you think that AI will completely replace humans one day? [...]

Well, if AIs do decide to completely replace us, they probably wouldn't need to bother with euthanasia. They can't possibly truly understand the depths of wanting a peaceful death, so they'd probably just kill us in a more efficient way.
> I don't understand how and why people see AI as a threat? Is it because people watch too many movies or something? AI isn't even intelligent... it merely regurgitates information that has been provided to it. I believe that, to be intelligent, there has to be an understanding, but AI has no understanding of what it's saying.

There are already weapons that use AI tools. In the end humans program them, but they act semi-autonomously (?), I think. That's at least one example I can think of.
> I don't understand how and why people see AI as a threat? [...]

I think that's a good point, and the idea of AI doesn't upset me. I will say, though, I'm at least slightly concerned that the people feeding it information are human and prone to error, yet are still going to put the final product in a position where it handles more responsibility than traditional algorithms ever have. So I guess I'm more concerned that the AI will fail in some big way and we'll all be too stupid to do anything on our own by that point than I am that the AI will try to enslave me. And if it tries to enslave me, it will probably figure the best way is to use some sort of AI waifu to honeypot me, so that sounds fun at least.
> I don't understand how and why people see AI as a threat? [...]

Right now people are mainly afraid of AI taking their jobs. Animators, writers, and voice actors are probably the most worried about being replaced at the moment, but eventually it could replace actors, accountants, doctors, therapists, lawyers, marketing executives, even CEOs.
> Right now people are mainly afraid of AI taking their jobs. [...]

And somehow those of us not replaced will still work 40-hour weeks as the standard, lol.
> Well if AIs do decide to completely replace us they probably wouldn't need to bother with euthanasia. [...]

I 100% agree with you.
If AI is somehow controlled to the point where it remains subservient to us, then it'll probably be even less agreeable about suicide, much like right now, when all of its so-called intelligence gets stripped away the moment you mention the slightest hint of suicide.
> Do you think that AI will completely replace humans one day? [...]

I don't think AI will completely replace humans, because we're the most important thing in the universe. Because of us there's meaning in this universe. I think there's something keeping us alive in this violent universe, where giant stars are destroyed and things like black holes can eat planets. Even a small asteroid from space could destroy us. We're in the habitable zone of the sun; otherwise a small change in temperature could destroy life on earth. I think there's something that wants us to discover this universe, which I think is one of the fundamental purposes of life. I think one day we'll go beyond our current limits and unleash our real potential. What do you think?
> I don't understand how and why people see AI as a threat? [...]

Artificial general intelligence (AGI) is generally the term used to refer to AI that can "think" and perform a variety of tasks in the way that a human can.
> I don't understand how and why people see AI as a threat? [...]

AI is not a threat NOW. It will only change how some jobs are done currently. But it is only a matter of time, I believe... Who can say that in 50, 100, or 200 years AI will not be much stronger than it is now?
> I don't think AI will absolutely replace humans because we're the most important thing in the universe. [...]

I think that aliens exist. I believe in the existence of extraterrestrial life. The universe is so vast that there must be something or someone else out there. We can't be the only ones.
> I think that's a good point, and the idea of AI doesn't upset me. [...]

Maybe we are the problem, then. AI is out of our league. I think if we reach that point, the majority of humans won't deserve to live, because they'll be too inefficient to compete against AI.
> Plenty of figures in the field are in favour of pausing or slowing down AI development until the alignment problem is solved (i.e. we're able to ensure AIs more intelligent than us will act in alignment with our interests). A commonly used term is AI safety, often used to refer to approaches to mitigating existential risks that might arise from the development of superintelligences.

I heard that Beff Jezos doesn't have a problem with AI replacing humans and causing an existential threat, but I think he considers the existential threat in the context of transhumanism leading to a post-human future. What you describe is an AI takeover, which is inherent in Nick Land's philosophy. I don't think a dystopia will happen, but rather a smooth transition of humans to the next evolutionary species.
The counterforce to this approach is effective accelerationism (e/acc), proponents of which believe the right thing to do is simply to speed up technological development. But anecdotally speaking, most experts are opposed to this.
Artificial general intelligence (AGI) is generally the term used to refer to AI that can "think" and perform a variety of tasks in the way that a human can.
You're right that AI isn't an existential threat... yet. But the idea is that once humans do develop AGI, that AGI may be able to develop AIs that are smarter than itself. And those AIs will be able to develop AIs that are smarter than themselves. And so on. This is what experts refer to as the intelligence explosion: the degree of intelligence possessed by AI grows exponentially until it far surpasses that of humans. Since this can all happen really quickly once the initial AI is developed, if humans haven't solved the alignment problem by then, we could be done as a species if the superintelligent AI overlords decide they want that.
Personally, I think the development of AGI is still a while away. But many experts do believe alignment needs to be solved, and that AI poses an existential threat if it isn't.
> Maybe cause the government is filled with old ppl who aren't rlly tech savvy

Or the government is full of old people who want to live forever and think AI will make that possible.
> Or the government is full of old people who want to live forever and think AI will make that possible.

I don't know about governments in every country, or in yours, but imagining this in my country... Yay...
> I heard that Beff Jezos doesn't have problem if AI replace humans and caused existential threat [...]

I confess I don't really know much about Nick Land's philosophies or the dark enlightenment in general; I was referring to accelerationism purely in a technological capacity (e/acc). I wasn't really trying to imply the possibility of what I'd call dystopia either, just some scenario wherein a misaligned superintelligence gets rid of humans for some arbitrary (to us) purpose, like paperclip maximising. In hindsight I probably shouldn't have used the word "overlords", lol. I don't see dystopia happening either; I don't have many convictions on the subject, but I agree with you that a smooth transition to a post-human future seems a lot more likely.
> I don't understand how and why people see AI as a threat? [...]

Not true, unfortunately.