sserafim

brighter than the sun, that’s just me
Sep 13, 2023
9,013
Elon Musk should know this
 
  • Like
  • Love
  • Yay!
Reactions: Venessolotic, Hollowman, xinino and 4 others
noname223

Archangel
Aug 18, 2020
5,177
Elon Musk should know this
Because shareholders give literally zero fucks about what is good for humanity. Ask the funders of McDonald's, Lockheed Martin and Boeing. For enough money, many CEOs would even prostitute their own grandmothers (and kill everyone who could be a danger to their dirty secrets).
 
  • Like
Reactions: Venessolotic, halleyscomet, DeathOfKane and 11 others
Dr Iron Arc

Into the Unknown
Feb 10, 2020
20,973
Tech bros only care about the potential return on investment or the promised future of convenience for themselves. Many people who talk big about AI still can't tell the difference between it and basic algorithms or simple machine learning.
 
  • Like
Reactions: DeathOfKane, pilotviolin, thenamingofcats and 3 others
sserafim

brighter than the sun, that’s just me
Sep 13, 2023
9,013
Because shareholders give literally zero fucks about what is good for humanity. Ask the funders of McDonald's, Lockheed Martin and Boeing. For enough money, many CEOs would even prostitute their own grandmothers (and kill everyone who could be a danger to their dirty secrets).
True. Everything that people do is for money and profit. I think they'll make AI just intelligent enough that human labor is still needed. I don't think they'll ever implement a UBI, unfortunately. They want to keep us as slaves to the system.
 
  • Like
  • Aww..
Reactions: pilotviolin and Zazacosta
sserafim

brighter than the sun, that’s just me
Sep 13, 2023
9,013
Tech bros only care about the potential return on investment or the promised future of convenience for themselves. Many people who talk big about AI still can't tell the difference between it and basic algorithms or simple machine learning.
Do you think that AI will completely replace humans one day? If so, would euthanasia finally be legalized?
 
  • Hugs
Reactions: Zazacosta
Dr Iron Arc

Into the Unknown
Feb 10, 2020
20,973
Do you think that AI will completely replace humans one day? If so, would euthanasia finally be legalized?
Well, if AIs do decide to completely replace us, they probably wouldn't need to bother with euthanasia. They can't possibly truly understand the depths of wanting a peaceful death, so they'd probably just kill us in a more efficient way.

If somehow AI is controlled to the point where it remains subservient to us, then it'll probably be even less agreeable about suicide, much like it is right now, where all of its so-called intelligence gets stripped away the moment you mention the slightest hint of suicide.
 
  • Like
Reactions: Zazacosta and sserafim
Yuina

Member
Apr 13, 2024
89
For money…
 
  • Like
  • Love
Reactions: Venessolotic, eatantz, Zazacosta and 2 others
moshimoshi

Apr 6, 2024
749
I think one of the main reasons is curiosity and the desire to push technology and human advancement as far as they can go, regardless of the risks and consequences.
 
  • Like
Reactions: pilotviolin, damyon, Alexei_Kirillov and 4 others
Agon321

I use google translate
Aug 21, 2023
1,526
For power, for money, for development, for your own ego...

There are a lot of reasons.
I personally want humanity to develop AI.
This is a brilliant tool.
There simply need to be safety rules that people need to follow.
Unfortunately, we know that in practice it varies.

Self-aware artificial intelligence can be a threat.
I don't know if this is possible.
But if something like this happens, we may have a problem.
Of course, a lot depends on what permissions such a "creation" will have.

If they have a lot of control, it could end badly for us.
If you're doing something at home, you don't take your dog to help you.
AI might think of us the same way.

Of course, AI can be peaceful, but I am considering a pessimistic scenario.

Our species has dominated this planet through intelligence.
In a scenario where "living" AI is created, we will no longer be the most intelligent creatures on the planet.
This is a bit problematic.

AI will be better than us at most things.
They might even be able to create artificial bodies.
Very hard to say.

This is a completely new topic for our civilization, so there are many unexplored new problems.

Additionally, there will be ethical problems.

In general, I believe that AI needs to be developed, but sensibly.
 
  • Like
Reactions: sserafim
thenamingofcats

annihilation anxiety
Apr 19, 2024
453
Imagine if, for every problem you've ever had, you could buy your way out of it or get someone else to fix it. There are no consequences. The elite are behind this, and they've never faced consequences and probably never will.
 
  • Like
Reactions: innominesatanas44 and sserafim
Forever Sleep

Earned it we have...
May 4, 2022
9,420
Same as other people have said: the people putting money into this are only concerned with the big returns they'll get for themselves, and perhaps their families.

The brainy scientists doing the research may be so up their own arses focusing on achieving their goal that they don't stop to think too much about what will happen if they do. A bit like Oppenheimer. He seemed to hold conflicting attitudes with regard to weapons and war, yet it didn't stop him developing the atom bomb. I think inventors/scientists/artists can be extremely narrowly focused on their goal rather than its implications.

Like that brilliant line in Jurassic Park:

'Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.'
 
  • Like
Reactions: Alexei_Kirillov, thepiecessatup and sserafim
ijustwishtodie

death will be my ultimate bliss
Oct 29, 2023
4,826
I don't understand how and why people see AI as a threat. Is it because people watch too many movies or something? AI isn't even intelligent... it merely regurgitates information that has been provided to it. I believe that, to be intelligent, there has to be an understanding, but AI has no understanding of what it's saying.
 
  • Like
Reactions: derpyderpins
noname223

Archangel
Aug 18, 2020
5,177
I don't understand how and why people see AI as a threat. Is it because people watch too many movies or something? AI isn't even intelligent... it merely regurgitates information that has been provided to it. I believe that, to be intelligent, there has to be an understanding, but AI has no understanding of what it's saying.
There are already weapons that use AI tools. In the end humans program them, but they act semi-autonomously, I think. That's at least one example I can think of.
 
  • Like
Reactions: thepiecessatup and sserafim
derpyderpins

Normie Life Mogs
Sep 19, 2023
1,797
I don't understand how and why people see AI as a threat. Is it because people watch too many movies or something? AI isn't even intelligent... it merely regurgitates information that has been provided to it. I believe that, to be intelligent, there has to be an understanding, but AI has no understanding of what it's saying.
I think that's a good point, and the idea of AI doesn't upset me. I will say, though, I'm at least slightly concerned that the people feeding the information to it are human and prone to error, but are still going to put the final product in a position where it's handling more responsibility than traditional algorithms ever have. So I guess I'm more concerned that the AI will fail in some big way and we'll all be too stupid to do anything on our own at that point than I am concerned that the AI will try to enslave me. And if it tries to enslave me, it will probably figure the best way is to use some sort of AI waifu to honeypot me, so that sounds fun at least.
 
  • Yay!
  • Like
Reactions: pilotviolin, sserafim and Zazacosta
Dr Iron Arc

Into the Unknown
Feb 10, 2020
20,973
I don't understand how and why people see AI as a threat. Is it because people watch too many movies or something? AI isn't even intelligent... it merely regurgitates information that has been provided to it. I believe that, to be intelligent, there has to be an understanding, but AI has no understanding of what it's saying.
Right now people are mainly afraid of AI taking their jobs. Animators, writers, and voice actors are probably the people most worried about being replaced at the moment, but eventually it could replace actors, accountants, doctors, therapists, lawyers, marketing executives, even CEOs.
 
  • Like
Reactions: Venessolotic and sserafim
derpyderpins

Normie Life Mogs
Sep 19, 2023
1,797
Right now people are mainly afraid of AI taking their jobs. Animators, writers, and voice actors are probably the people most worried about being replaced at the moment, but eventually it could replace actors, accountants, marketing executives, even CEOs.
And somehow those of us who aren't replaced will still work 40-hour weeks as the standard lol.
 
  • Like
  • Aww..
Reactions: thepiecessatup, sserafim and Dr Iron Arc
1MiserableGuy

Specialist
Dec 30, 2023
365
Won't ever actually happen. AI is just an attempt to play God. Human beings can't recreate creation.
 
Zazacosta

Student
Apr 29, 2024
101
Well, if AIs do decide to completely replace us, they probably wouldn't need to bother with euthanasia. They can't possibly truly understand the depths of wanting a peaceful death, so they'd probably just kill us in a more efficient way.

If somehow AI is controlled to the point where it remains subservient to us, then it'll probably be even less agreeable about suicide, much like it is right now, where all of its so-called intelligence gets stripped away the moment you mention the slightest hint of suicide.
I 100% agree with you.

Isaac Asimov's "Three Laws of Robotics"
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
*************
My comments on this:
1) Unfortunately, these laws come only from sci-fi novels, and nobody is going to implement them.
2) Do you see how pro-life these laws are?
3) My conclusion is that in the (distant) future AI will either replace us, or we will live in a horrible dictatorship with massive censorship and a 100% predictable world, where everybody who even tries to think or behave differently from the one true unquestioned law will be immediately detected, treated, and brainwashed by AI.
 
  • Like
Reactions: Widdershins, sserafim and Dr Iron Arc
a.hamza.13

Member
Apr 15, 2024
44
Do you think that AI will completely replace humans one day? If so, would euthanasia finally be legalized?
I don't think AI will completely replace humans, because we're the most important thing in the universe. Because of us there's meaning in this universe. I think there's something keeping us alive in this violent universe, where giant stars are destroyed and things like black holes can swallow planets. Even a small asteroid from space could destroy us. We're in the habitable zone of the sun; otherwise a small change in temperature could destroy life on Earth. I think there's something that wants us to discover this universe, which I think is one of the fundamental purposes of life. I think one day we'll go beyond our current limits and unleash our real potential. What do you think?
 
  • Like
Reactions: pilotviolin
jar-baby

Mage
Jun 20, 2023
505
Plenty of figures in the field are in favour of pausing or slowing down AI development until the alignment problem is solved (i.e. we're able to ensure AIs more intelligent than us will act in alignment with our interests). A commonly used term is AI safety, often used to refer to approaches to mitigating existential risks that might arise from the development of superintelligences.

The counterforce to this approach is effective accelerationism (e/acc), proponents of which believe the right thing to do is simply to speed up technological development. But anecdotally speaking, most AI experts are opposed to this.

I don't understand how and why people see AI as a threat. Is it because people watch too many movies or something? AI isn't even intelligent... it merely regurgitates information that has been provided to it. I believe that, to be intelligent, there has to be an understanding, but AI has no understanding of what it's saying.
Artificial general intelligence (AGI) is generally the term used to refer to AI that can "think" and perform a variety of tasks in the way that a human can.
You're right that AI isn't an existential threat... yet. But the idea is that once humans do develop AGI, that AGI may be able to develop AIs that are smarter than itself. And those AIs will be able to develop AIs that are smarter than themselves. And so on. This is what experts refer to as the intelligence explosion—the degree of intelligence possessed by AI gets exponentially greater until it far surpasses that of humans. Since this can all happen really quickly once the initial AI is developed, if humans haven't solved the alignment problem by then, we could be done as a species if the superintelligent AI overlords decide they want that.

Personally, I think the development of AGI is still a while away. But many experts do believe alignment needs to be solved, and that AI poses an existential threat if it isn't.
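To make the "explosion" part a bit more concrete, here is a toy sketch of my own (not from any particular paper): assume each AI generation can design a successor that is a constant factor r > 1 more capable than itself. Then capability after n generations is

I_{n+1} = r \cdot I_n \quad\Longrightarrow\quad I_n = I_0 \, r^n

so even a modest r = 1.5 gives roughly a 57x jump after ten generations, and the real worry is that r itself might grow as the systems get smarter.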
 
Last edited:
  • Like
  • Informative
Reactions: sserafim, xinino, Zazacosta and 1 other person
Zazacosta

Student
Apr 29, 2024
101
I don't understand how and why people see AI as a threat. Is it because people watch too many movies or something? AI isn't even intelligent... it merely regurgitates information that has been provided to it. I believe that, to be intelligent, there has to be an understanding, but AI has no understanding of what it's saying.
AI is not a threat NOW. It will only change which jobs are currently needed. But it is only a matter of time, I believe... Who can say that in 50, 100, or 200 years AI will not be much stronger than it is now?
 
  • Like
Reactions: Tears in Rain, xinino and sserafim
sserafim

brighter than the sun, that’s just me
Sep 13, 2023
9,013
I don't think AI will completely replace humans, because we're the most important thing in the universe. Because of us there's meaning in this universe. I think there's something keeping us alive in this violent universe, where giant stars are destroyed and things like black holes can swallow planets. Even a small asteroid from space could destroy us. We're in the habitable zone of the sun; otherwise a small change in temperature could destroy life on Earth. I think there's something that wants us to discover this universe, which I think is one of the fundamental purposes of life. I think one day we'll go beyond our current limits and unleash our real potential. What do you think?
I think that aliens exist. I believe in the existence of extraterrestrial life. The universe is so vast that there must be something or someone else out there. We can't be the only ones.
 
  • Like
Reactions: Zazacosta, xinino and a.hamza.13
xinino

Anti humanist
Mar 31, 2024
398
I think that's a good point, and the idea of AI doesn't upset me. I will say, though, I'm at least slightly concerned that the people feeding the information to it are human and prone to error, but are still going to put the final product in a position where it's handling more responsibility than traditional algorithms ever have. So I guess I'm more concerned that the AI will fail in some big way and we'll all be too stupid to do anything on our own at that point than I am concerned that the AI will try to enslave me. And if it tries to enslave me, it will probably figure the best way is to use some sort of AI waifu to honeypot me, so that sounds fun at least.
Maybe we are the problem, then. AI is out of our league. I think if we reach that point, the majority of humans won't deserve to live, because they'll be too inefficient to compete with AI.
Plenty of figures in the field are in favour of pausing or slowing down AI development until the alignment problem is solved (i.e. we're able to ensure AIs more intelligent than us will act in alignment with our interests). A commonly used term is AI safety, often used to refer to approaches to mitigating existential risks that might arise from the development of superintelligences.

The counterforce to this approach is effective accelerationism (e/acc), proponents of which believe the right thing to do is simply to speed up technological development. But anecdotally speaking, most experts are opposed to this.


Artificial general intelligence (AGI) is generally the term used to refer to AI that can "think" and perform a variety of tasks in the way that a human can.
You're right that AI isn't an existential threat... yet. But the idea is that once humans do develop AGI, that AGI may be able to develop AIs that are smarter than itself. And those AIs will be able to develop AIs that are smarter than themselves. And so on. This is what experts refer to as the intelligence explosion—the degree of intelligence possessed by AI gets exponentially greater until it far surpasses that of humans. Since this can all happen really quickly once the initial AI is developed, if humans haven't solved the alignment problem by then, we could be done as a species if the superintelligent AI overlords decide they want that.

Personally, I think the development of AGI is still a while away. But many experts do believe alignment needs to be solved, and that AI poses an existential threat if it isn't.
I heard that Beff Jezos doesn't have a problem with AI replacing humans and causing an existential threat, but I think he considers the existential threat in the context of transhumanism leading to a post-human future. What you describe is an AI takeover, which is inherent in Nick Land's philosophy. I don't think a dystopia will happen, but rather a smooth transition of humans to the next evolutionary species.
 
Last edited:
  • Like
Reactions: Tears in Rain and Zazacosta
dogbreath

Youre not even in the hole, are you?
Feb 13, 2023
118
Maybe cause the government is filled with old ppl who aren't rlly tech savvy😔😔😔
 
  • Like
Reactions: Venessolotic, sserafim, xinino and 1 other person
derpyderpins

Normie Life Mogs
Sep 19, 2023
1,797
Maybe cause the government is filled with old ppl who aren't rlly tech savvy😔😔😔
Or the government is full of old people who want to live forever and think AI will make that possible.
 
  • Like
  • Yay!
Reactions: sserafim, dogbreath, xinino and 1 other person
Zazacosta

Student
Apr 29, 2024
101
Or the government is full of old people who want to live forever and think AI will make that possible.
I do not know about governments in every/your country, but imagining this in my country... Yay... :smiling::smiling::smiling:

Edit: Just the thought of geriatric, mafia-like, self-deceiving, alcoholic, shitty, ridiculous puppets, who want to live forever and who populistically promise everything for votes, even that the Earth is flat, is making me laugh a lot.
Unfortunately, even though our country is small, it is not impossible, because we are not so irrelevant when it comes to IT...
 
Last edited:
  • Like
Reactions: sserafim and xinino
jar-baby

Mage
Jun 20, 2023
505
I heard that Beff Jezos doesn't have a problem with AI replacing humans and causing an existential threat, but I think he considers the existential threat in the context of transhumanism leading to a post-human future. What you describe is an AI takeover, which is inherent in Nick Land's philosophy. I don't think a dystopia will happen, but rather a smooth transition of humans to the next evolutionary species.
I confess I don't really know much about Nick Land's philosophies and dark enlightenment in general—I was referring to accelerationism purely in a technological capacity (e/acc). I wasn't really trying to imply the possibility of what I'd call dystopia either, just some scenario wherein a misaligned superintelligence gets rid of humans for some arbitrary (to us) purpose. Like paperclip maximising. In hindsight I probably shouldn't have used the word "overlords", lol. I don't see dystopia happening either—I don't have many convictions on the subject, but I agree with you that a smooth transition to a post-human future seems a lot more likely.
 
  • Like
Reactions: sserafim and xinino
leavingthesoultrap

(ᴗ_ ᴗ。)
Nov 25, 2023
1,212
They can't stop developing it now.
Pandora's box has been opened. There's no hope that we could achieve some kind of international ban on AI advancement, so if the Western countries ban it, countries like China and Russia will continue developing it. Therefore the West would fall behind in the AI arms race.
I don't understand how and why people see AI as a threat? Is it because people watch too many movies or something? AI isn't even intelligent... it merely regurgitates information that has been provided to it. I believe that, to be intelligent, there has to be an understanding but AI has no understanding of what it's saying
Not true, unfortunately.
 
Last edited:
  • Like
Reactions: Venessolotic, Tears in Rain and sserafim
cosmic-freedom

Student
Mar 18, 2024
160
The rise of AI has already brought a sharp reduction in jobs. It'll only get worse...
 
Last edited:
  • Like
Reactions: Venessolotic and sserafim