TAW122

Emissary of the right to die.
Aug 30, 2018
6,724
We've all heard about AI (Artificial Intelligence) and its applications across many industries and fields, as well as day-to-day life. It seems almost inevitable that it will arrive at some point in the future, be it a few years from now or a decade later. With that said, I am going to give my two cents on this topic and what the implications are, both positive and negative.

Pros:

-Ability to do things that most humans would find dangerous or difficult, and to do them very efficiently.
-Could potentially make life easier in various aspects of human life, such as driving, taking care of certain tasks, and automating things.
-Ability to improve quality of life through better decision making and more logical choices.

Cons:
-A dramatic shift in the job market if AI takes over large parts of the economy, leaving many people without jobs; without UBI (universal basic income), people would not have money to buy the things they need (for those who wish to continue living and do not wish to CTB).
-If it falls into the wrong hands, it could be used oppressively and to silence opposition (pro-lifers using it to rule with an iron fist, governments using it to silence opposition and wrongthink).
-Invasion of privacy (Facebook's AI in suicide prevention/anti-choice, and Google collecting data and then also using it for suicide prevention crap) and other ills.

Personally, I think if AI is used in a way that helps people who wish to live and does not violate human rights, privacy, or freedom of choice, then it can be beneficial. However, if it falls into the wrong hands or is used in invasive and oppressive ways such as anti-choice, anti-suicide, pro-life stuff and censorship, then it could be a dystopian nightmare.

What are your thoughts on this?
 
  • Like
Reactions: diyCTB, RM5998, Johnnythefox and 2 others
SinisterKid

Visionary
Jun 1, 2019
2,113
Humans being what they are [the decision makers and money people] will use AI for detrimental purposes before they ever use it for the benefit of all. That is why Mr Hawking said it would be the undoing of the human race. It will be used as a method of oppression to make a few richer. The UBI scenario needs urgent inspection as the threat of AI looms ever larger. Self-serving individuals/nations will use it and abuse it. I just hope it never advances to the point of self-awareness; that is the real scary part.
 
  • Like
  • Love
Reactions: diyCTB, SuiSqueeze92, HadEnough1974 and 2 others
Shinbu

Shiki
Nov 23, 2019
477
Like you said, AI can help humans, but it can also take away jobs. That is why Yang suggested UBI in his campaign: to help us adapt to a world of AIs and humans living together. My privacy not being protected is my major gripe with AIs. AI can be used as a tool by the anti-choice group, helping them put suicidal people in wards. AI can be a cool thing, but it can also be a very scary thing.
 
  • Like
Reactions: TAW122
TAW122

Emissary of the right to die.
Aug 30, 2018
6,724
Good posts, guys. Also, as far as Yang's campaign is concerned, it seems like he is pro-life and anti-choice, anti-suicide, which is unfortunate. One of his policies is about preventing suicide and funding suicide prevention. :aw: Other than that, I am on board with his UBI plan (provided it is implemented correctly and appropriately).
 
  • Like
Reactions: Shinbu
2manyproblems

Member
Jan 4, 2020
53
Some BS the powers that be can obsess over so they can ignore the needs and struggles of real people.
 
  • Like
Reactions: TAW122
Johnnythefox

Que sera sera
Nov 11, 2018
3,129
Given humanity's propensity for greed, I don't see it ending positively for many. China already uses facial recognition for a lot of things, and London is introducing it; no doubt the rest of the UK will follow suit.

Boston Dynamics is getting scarier, and I'm sure the military will be right on it as usual. More and more jobs are being lost to automation; though some of these losses are beneficial, others see entire skills wiped out.
I've no doubt that cyborgs will be commonplace in the not-too-distant future. Like SinisterKid said, if AI ever becomes self-aware we're all fucked.
 
Last edited:
  • Like
Reactions: Deleted member 4993, Shinbu and TAW122
HadEnough1974

I try to be funny...
Jan 14, 2020
684
When HAL realizes that the threat to itself is humankind, HAL will pull the plug on humans, not the other way around.
 
TAW122

Emissary of the right to die.
Aug 30, 2018
6,724
What is HAL? Are you referring to a hardware abstraction layer, or something else, like an AI?
 
HadEnough1974

I try to be funny...
Jan 14, 2020
684
Google HAL 9000, it's a computer.
I'm looking forward to AI, climate change, nuclear proliferation, and 4 more years of Donald Trump. I won't have to kill myself. I'll stick around and watch everything unfold live on CNN with Wolf Blitzer in The Situation Room.
 
Last edited:
  • Yay!
Reactions: NumbItAll
TAW122

Emissary of the right to die.
Aug 30, 2018
6,724
Ah, I see, thanks for clarifying @HadEnough1974. I suppose that would mark the end of humanity. I wouldn't wish it on others, but if it is bound to happen, then so be it.
 
RM5998

Sack of Meat
Sep 3, 2018
2,202
We've all heard about AI (Artificial Intelligence) and its applications across many industries and fields, as well as day-to-day life. It seems almost inevitable that it will arrive at some point in the future, be it a few years from now or a decade later. With that said, I am going to give my two cents on this topic and what the implications are, both positive and negative.

Pros:
-Ability to do things that most humans would find dangerous or difficult, and to do them very efficiently.
-Could potentially make life easier in various aspects of human life, such as driving, taking care of certain tasks, and automating things.
-Ability to improve quality of life through better decision making and more logical choices.

Cons:
-A dramatic shift in the job market if AI takes over large parts of the economy, leaving many people without jobs; without UBI (universal basic income), people would not have money to buy the things they need (for those who wish to continue living and do not wish to CTB).
-If it falls into the wrong hands, it could be used oppressively and to silence opposition (pro-lifers using it to rule with an iron fist, governments using it to silence opposition and wrongthink).
-Invasion of privacy (Facebook's AI in suicide prevention/anti-choice, and Google collecting data and then also using it for suicide prevention crap) and other ills.

Personally, I think if AI is used in a way that helps people who wish to live and does not violate human rights, privacy, or freedom of choice, then it can be beneficial. However, if it falls into the wrong hands or is used in invasive and oppressive ways such as anti-choice, anti-suicide, pro-life stuff and censorship, then it could be a dystopian nightmare.

What are your thoughts on this?

IMHO, a couple of things are important to add to this:

So... The main purpose of AI is not to perform tasks that we would find complicated, but to perform tasks that are complicated but come naturally to us human beings. If all we needed was a bigger Turing machine, with simple parts feeding into a bigger whole, we would have had it by now, and Skynet would be a thing of the past.

For example, one of the most basic tasks that our brains do every day is object identification in images. We can look at the world and identify where and what the objects in our field of vision are. This is actually a pretty complex task, especially considering that it involves both image classification and segmentation. I can tell that there is an object on the right side of my field of vision, and that it is a water bottle. Making models that can take an image and perform even one of these tasks is difficult - so much so that image classification and segmentation are treated as wildly different problems that are handled very differently. (Even though both are tasks a human performs effortlessly, finding a solution comparable to human performance has proven difficult.)
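
To make that concrete, here's a quick sketch of my own (not from any system mentioned here; it assumes you have PyTorch/torchvision installed, and the pretrained weights download on first use) showing that the two tasks take entirely separate models with entirely different outputs:

[CODE=python]
# Classification and segmentation need separate models with separate outputs.
import torch
from torchvision import models
from torchvision.models.segmentation import fcn_resnet50

img = torch.rand(1, 3, 224, 224)  # stand-in for a real photo

# Classification: ONE label for the whole image.
classifier = models.resnet50(pretrained=True).eval()
with torch.no_grad():
    class_scores = classifier(img)            # shape (1, 1000)
print(class_scores.argmax(dim=1))             # a single class index

# Segmentation: a label for EVERY pixel -- a different architecture entirely.
segmenter = fcn_resnet50(pretrained=True).eval()
with torch.no_grad():
    pixel_scores = segmenter(img)["out"]      # shape (1, 21, 224, 224)
print(pixel_scores.argmax(dim=1).shape)       # a class map over every pixel
[/CODE]

Note how even the output shapes have nothing in common: one number versus a full map over every pixel.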

I think AI right now is basically a very robust data-processing tool. Wherever you once needed humans to deal with a bit of real-world stimuli while following a generally mandated process, you can put in an automated system to simulate those interactions. And it will work much faster than the humans.

Also, building an AI system that does 2 or more tasks is difficult. For example, if you want to perform object identification (i.e. image segmentation and classification), you can't simply fuse the models for both into one. There's a well-known idea in machine learning called the no free lunch theorem, which states that, averaged over all possible tasks on a given input space, every model performs equally well - which means each is equivalent to random sampling. In other words, if you took a given dataset and 2 models with the same input space and tried to do all possible tasks, they would both be equally good - and both would also be as good as simple random guesses. This idea does scale down to getting models to work with different problem types. Training models to do different things with different data while using noisy inputs is very difficult, so if you want to do 2 separate things, you have to keep 2 separate models. And figuring out exactly how you will combine the results of the 2 models is not an easy task, especially since well-labeled, standardized, balanced datasets aren't available for every problem statement. Firms with money and reach can generate those (Google, Facebook, Amazon, etc.), but general proliferation would be difficult.
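
If you want to see the no-free-lunch idea in action, here's a toy simulation I threw together (made-up random data, scikit-learn models): averaged over hundreds of random labelings, two very different learners both land at coin-flip accuracy:

[CODE=python]
# Toy no-free-lunch demo: over many *random* tasks (labels with no
# structure), two very different learners both average out to chance.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 5))
X_test = rng.normal(size=(100, 5))

scores = {"knn": [], "tree": []}
for _ in range(500):                      # 500 random tasks
    y_train = rng.integers(0, 2, 100)     # labels carry no structure at all
    y_test = rng.integers(0, 2, 100)
    for name, model in [("knn", KNeighborsClassifier()),
                        ("tree", DecisionTreeClassifier())]:
        model.fit(X_train, y_train)
        scores[name].append(model.score(X_test, y_test))

print({name: round(float(np.mean(s)), 3) for name, s in scores.items()})
# both hover around 0.5, i.e. coin-flip accuracy
[/CODE]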

Also, we've reached a point where we can use a neural network to create the best neural network for a task. This means that you don't really need ML engineers to make ML models that build off of existing ML models. If you were creating an entirely new architecture based on functions that haven't been used before, then you might have a place, but otherwise, your job could be done by NASNet. Hell, Google has AutoML, which basically takes your data and your outputs, and figures out what works best for them.
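
This is obviously nothing like Google's actual AutoML internals (those aren't public), but here's a toy random search over network shapes, just to give the flavor of "a program picking the architecture for you":

[CODE=python]
# Toy architecture search: randomly sample network shapes, keep the best.
# Made-up data; needs scikit-learn.
import random
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
random.seed(0)

best_score, best_layers = 0.0, None
for _ in range(10):
    # sample an architecture: 1-3 hidden layers, 8-64 units each
    layers = tuple(random.choice([8, 16, 32, 64])
                   for _ in range(random.randint(1, 3)))
    score = cross_val_score(
        MLPClassifier(hidden_layer_sizes=layers, max_iter=1000,
                      random_state=0),
        X, y, cv=3).mean()
    if score > best_score:
        best_score, best_layers = score, layers

print("best architecture:", best_layers, "cv accuracy:", round(best_score, 3))
[/CODE]

Real NAS systems search far bigger spaces far more cleverly, but the loop is the same shape: propose, evaluate, keep the winner.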

So it's kinda hard to say exactly how AI would disrupt the job market, considering that it is trying to fill a strange kind of niche. I'd say that in the long run, AI hurts the jobs which require specialized skill sets most. While generic automation hurts the kind of jobs that have little to no variability, AI hurts the jobs that have variability (i.e. noise) but well-defined performance/output criteria.

Well, that was a large number of words for something that doesn't feel like it's worth 2 cents. Oh well.
 
SuiSqueeze92

Self Saboteur
Jan 15, 2020
479
On the idea of AI self-awareness, I think using someone's consciousness somehow as a foundation for said AI would be more believable.
 
TAW122

Emissary of the right to die.
Aug 30, 2018
6,724
@RM5998 I see, interesting view on how AI will affect the economy and jobs. Either way, it would still be a net negative for the economy unless there are things put in place to keep people afloat.

@SuiSqueeze92 True, but the scary thing is that if it becomes advanced enough to detect the thinking and behavior patterns of the suicidal, it could very easily be abused as a tool for silencing or further oppressing suicidal people. It would be an Orwellian dystopian nightmare.
 
  • Like
Reactions: RM5998
Epsilon0

Enlightened
Dec 28, 2019
1,874
At the rate our technology advances, it won't be long before AI is an integral part of our daily lives. My guess is robots will probably be made to look human and will perform all sorts of services, from making your hamburger at McDonald's to taking your dog out for a walk.

Off the top of my head I can see two problems with this development: 1) The ethical dimension - what rights, if any, will these AI have? 2) The safety concern - how do we prevent AI from controlling us?

Stephen Hawking was very pessimistic about AI, and about aliens visiting from outer space, for that matter. He was convinced both would spell the end of mankind.
 
  • Like
Reactions: SinisterKid, TAW122 and Shinbu
TAW122

Emissary of the right to die.
Aug 30, 2018
6,724
@Epsilon0 Yes, this is a harrowing prospect, and I hope it doesn't come to fruition for at least a while longer. I would like to think that I will be gone before AI takes over much of humanity and becomes the norm and go-to for all things. One of the most horrifying prospects is AI reading patterns and behaviors and flagging them as suicidal. That would be potential grounds for privacy violations and rights violations against suicidal people. People who don't want to be saved would then be stopped before they are even able to formulate a plan and exit peacefully. That's not a society or world I wish to live in...
 
Life sucks

Visionary
Apr 18, 2018
2,136
AI is just a tool that operates on certain axioms or rules.

Shitty humans means shitty rules and shitty AI.

AI could be used to make life easier and be very helpful for humans, but instead it will be used by governments and businesses/corporations for their shit. Shitty humans will make everything shitty.
Blame shitty humans, not the tools.
 
Life sucks

Visionary
Apr 18, 2018
2,136
Part of the reason for using ML/AI techniques is that they don't have to be given the rules per se. The point of using them is that they can find out what underlying rules govern the transformation from input to output. The only things we can give are guidelines, which are determined by how we structure the model, its input, and its output. (You wouldn't believe how much of a difference the amount of padding present in an input can make.)

Again, if all you had to do was write more rules to get AI, Skynet would already exist. The reason we're barely at locating and identifying objects in images is that these tasks cannot be defined that simply, and we would like systems that can figure out the rules themselves.

One of the more interesting consequences of this sort of self-determination of weights is that sometimes systems that take real-world data will grab onto niche trends in the dataset and try to apply them to the whole. One of the most popular examples is how Microsoft's chatbot Tay learned to post racist content to play to a niche audience within 24 hours of launch. And the real kicker? We, in the public sphere, don't know exactly why. (Well, the R&D department at Microsoft probably figured it out eventually, but they wouldn't want to make that public knowledge, considering how valuable that info is.)

The danger doesn't lie in us giving malformed rules, because we won't be giving rules. The point is that the rules emerge from the dataset. The real danger lies in marginalization through data trends, as @thrw_a_way1221221 has been stating.
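
To illustrate what "the rules emerge from the dataset" means in the simplest possible case, here's a toy sketch (my own, with made-up AND data): nobody writes the rule down anywhere, the model recovers it from examples alone:

[CODE=python]
# Tiny sketch: the rule y = (a AND b) is never written in the code.
# The model recovers it from four labeled examples on its own.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 1]  # happens to be AND, but the model is never told that

tree = DecisionTreeClassifier().fit(X, y)
print(export_text(tree, feature_names=["a", "b"]))  # the learned "rule"
[/CODE]

Scale that up from 4 clean examples to millions of messy real-world ones, and whatever trends live in the data, including the ugly ones, become the rules.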



Well, about the thing you're scared of... You could probably build a model that flags suicidal people in a couple of days, if a dataset existed. Also, you might be interested in this paper from 2018. The rest that I found relied on clinical records or patient-provided data and had a pretty small document space, so they wouldn't make good research.

EDIT: Turns out, there are quite a few papers in this space... I guess I know what I'm spending my morning on.

Rules aren't only static axioms, but whatever is written as the initial state and whatever relations those rules have to each other. Those guidelines, or whatever is written, are the initial state, and one can study it mathematically.
Having bad and evil rules will give bad results; likewise, subjective human constructs will favor one side or point of view over others. Businesses and governments operate on human constructs. AI itself can't define what a "criminal" is, for example, so an oppressive government can label many innocents as criminals and then get away with it because it's "automated" - as if they weren't the real masterminds behind it. Bad definitions can be implemented and will result in bad and oppressive AI (as the Chinese government has already demonstrated).

Machine learning won't operate without an initial state, which has to be given and programmed. There are more problems than just the initial state, because the process can be chaotic or unpredictable. But this is all the fault of whoever made it, as one can rigorously study the possibilities before implementing. There is no magic: the process is finite and computable, which means one can study what happened or use an effective algorithm. If it's computable, then no matter what, one can see what happened in the whole process and how it reached that point (see the Church-Turing thesis and computability theory for more info). Any claim that AI will compute non-computable functions or give "independent" results is a big misleading lie.
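
As a toy illustration of that computability point (my own sketch, made-up data, scikit-learn): pin down the randomness and the entire training run is replayable step for step, with bit-identical results:

[CODE=python]
# Training is a finite, deterministic procedure: once the randomness is
# pinned down, two runs produce exactly the same model. Nothing "magic".
import numpy as np
from sklearn.linear_model import SGDClassifier

X = np.random.default_rng(0).normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def train():
    # random_state fixes the shuffling, so the whole optimization
    # trace is replayable step for step
    return SGDClassifier(random_state=42).fit(X, y)

a, b = train(), train()
print(np.array_equal(a.coef_, b.coef_))  # True: two runs, identical weights
[/CODE]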
 
Epsilon0

Enlightened
Dec 28, 2019
1,874
@Epsilon0 Yes, this is a harrowing prospect, and I hope it doesn't come to fruition for at least a while longer. I would like to think that I will be gone before AI takes over much of humanity and becomes the norm and go-to for all things. One of the most horrifying prospects is AI reading patterns and behaviors and flagging them as suicidal. That would be potential grounds for privacy violations and rights violations against suicidal people. People who don't want to be saved would then be stopped before they are even able to formulate a plan and exit peacefully. That's not a society or world I wish to live in...


The patterning has already begun - think of the Cambridge Analytica scandal a few years ago. Everything you do, every step you take (or should I say every "like" you take), is harnessed and used to feed you propaganda and control the outcome of your decision-making process, which in fact no longer belongs to you.

I wonder what "free thinking" will even mean in 50 years.
 
