Source Energy

I want to be where people areN'T...
Jan 23, 2023
705
Have you ever tried operating a machine when you're angry or in a bad mood? It will stop, freeze, malfunction, and give you 1,000 problems.
My mom used to do tailoring in her free time, and her sewing machine always responded to her mood. When she sat down to work irritated, the needle would get stuck in the fabric, make a sizzling noise, and stop. My mom would get angry and direct all kinds of curses at it, and then the machine would quit working altogether. If she sat down the next day in a calm state, the machine functioned perfectly, like nothing had happened.
It's happened to me too with computers, phones, and whatnot. Everything is energy, and whether you want to accept it or not, energy has consciousness. It is not all in the brain.
 
Reactions: MlKE, Csmith8827 and jodes2
jodes2

Hello people ❤️
Aug 28, 2022
7,737
Have you ever tried operating a machine when you're angry or in a bad mood? It will stop, freeze, malfunction, and give you 1,000 problems.
My mom used to do tailoring in her free time, and her sewing machine always responded to her mood. When she sat down to work irritated, the needle would get stuck in the fabric, make a sizzling noise, and stop. My mom would get angry and direct all kinds of curses at it, and then the machine would quit working altogether. If she sat down the next day in a calm state, the machine functioned perfectly, like nothing had happened.
It's happened to me too with computers, phones, and whatnot. Everything is energy, and whether you want to accept it or not, energy has consciousness. It is not all in the brain.
Nice take! I get the reverse effect of energy from machines. They all have a mind of their own, and when they malfunction, I malfunction 😂
 
Reactions: Source Energy
TransMagical

Volo ergo sum
Feb 10, 2023
96
How does the brain create consciousness? If it can just be switched on with drugs, surely it's a phenomenon resulting from physical structures, and not some other spiritual thing? Why is it so hard for computers to create consciousness?
I think AI cannot gain consciousness; it just follows its programming.
AI may be able to program other AIs in the future to make war machines, but never true consciousness.
 
Csmith8827

Don't you listen to your heart? (Listen to it...)
Oct 26, 2019
905
Have you ever tried operating a machine when you're angry or in a bad mood? It will stop, freeze, malfunction, and give you 1,000 problems.
My mom used to do tailoring in her free time, and her sewing machine always responded to her mood. When she sat down to work irritated, the needle would get stuck in the fabric, make a sizzling noise, and stop. My mom would get angry and direct all kinds of curses at it, and then the machine would quit working altogether. If she sat down the next day in a calm state, the machine functioned perfectly, like nothing had happened.
It's happened to me too with computers, phones, and whatnot. Everything is energy, and whether you want to accept it or not, energy has consciousness. It is not all in the brain.
That shit's interesting, and my phone kind of responds to that too. I know what you're talking about... but what exactly did you mean by consciousness? The fact that I know that I'm aware/awake?
 
Source Energy

I want to be where people areN'T...
Jan 23, 2023
705
That shit's interesting, and my phone kind of responds to that too. I know what you're talking about... but what exactly did you mean by consciousness? The fact that I know that I'm aware/awake?
Yes. Awareness = sentience.
You might want to read about panpsychism... it might literally be a game changer.
 
Jarni

Love is a toothache in the heart. H. Heine
Dec 12, 2020
379
Why not... AI just has to ruminate 1,000 thoughts per second about itself, its looks, and its intellectual abilities, and compare itself to other AIs as much as possible... That's all 😂
 
buyersremorse

useless
Feb 16, 2023
64
Hmm, I guess it depends on how you go about defining consciousness itself. A lot of people (me included) believe consciousness is any subjective experience of the world: a human's consciousness is their subjective way of living (paying taxes, working, etc.), and the same goes for other conscious beings; trees, cats, etc. have a subjective way of experiencing life. Rocks do not have this experience (for all we know) and are hence not conscious.

If an AI were to develop a subjective experience of being alive, it too (under this definition of consciousness) would be considered conscious. I don't know how that would come about, though. Out of curiosity I've been messing around with OpenAI's ChatGPT lately, and it answered a lot of my questions by saying that even the highest AI (idk if it was just referring to itself or an actual godlike level of AI) would only ever process and generate responses based on a set of rules and algorithms made by humans, with no subjective experience (for the time being). But as for the future... who knows? Since this is SS, I don't think any of us are planning on staying to find out though lmao. It would be cool if someone out there figured out a way, but if they were to somehow gain consciousness, I doubt they'd act in the interest of humans.
 
Parting Sorrow

Member
Feb 18, 2023
23
Anyone watch the last "Last Week Tonight" with John Oliver? It had a fun exploration of this topic. According to that episode, some experts in the field think it's 10 years or so before AI gains self-awareness, and others think it's not even possible.
 
Next-to-Nil

Begrudgingly Everlasting
Mar 2, 2023
238
A future where the thinking capabilities of computers approach our own is quickly coming into view. We feel ever more powerful machine-learning (ML) algorithms breathing down our necks. Rapid progress in coming decades will bring about machines with human-level intelligence capable of speech and reasoning, with a myriad of contributions to economics, politics and, inevitably, warcraft. The birth of true artificial intelligence will profoundly affect humankind's future, including whether it has one.

The following quotes provide a case in point:

"From the time the last great artificial intelligence breakthrough was reached in the late 1940s, scientists around the world have looked for ways of harnessing this 'artificial intelligence' to improve technology beyond what even the most sophisticated of today's artificial intelligence programs can achieve."

"Even now, research is ongoing to better understand what the new AI programs will be able to do, while remaining within the bounds of today's intelligence. Most AI programs currently programmed have been limited primarily to making simple decisions or performing simple operations on relatively small amounts of data."

These two paragraphs were written by GPT-2, a language bot I tried last summer. Developed by OpenAI, a San Francisco–based institute that promotes beneficial AI, GPT-2 is an ML algorithm with a seemingly idiotic task: presented with some arbitrary starter text, it must predict the next word. The network isn't taught to "understand" prose in any human sense. Instead, during its training phase, it adjusts the internal connections in its simulated neural networks to best anticipate the next word, the word after that, and so on. Trained on eight million Web pages, its innards contain more than a billion connections that emulate synapses, the connecting points between neurons. When I entered the first few sentences of the article you are reading, the algorithm spewed out two paragraphs that sounded like a freshman's effort to recall the gist of an introductory lecture on machine learning during which she was daydreaming. The output contains all the right words and phrases—not bad, really! Primed with the same text a second time, the algorithm comes up with something different.
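To make the bot's task concrete, here is a minimal sketch of next-word prediction. It assumes nothing about GPT-2's actual architecture; the count table, corpus, and greedy decoding below are invented for illustration only:

```python
# Toy next-word predictor: count which word follows which, then generate
# by repeatedly emitting the most likely successor. GPT-2 does this with a
# billion-connection neural network instead of a count table.
from collections import Counter, defaultdict

corpus = ("the brain is a machine . the machine is not a brain . "
          "the brain obeys physical laws .").split()

# Tally how often each word follows each other word.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def generate(seed: str, length: int = 8) -> str:
    """Greedy autoregressive decoding: each output word is fed back in."""
    out = [seed]
    for _ in range(length):
        follow = successors.get(out[-1])
        if not follow:
            break
        out.append(follow.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # -> "the brain is a machine . the brain is"
```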

The offspring of such bots will unleash a tidal wave of "deepfake" product reviews and news stories that will add to the miasma of the Internet. They will become just one more example of programs that do things hitherto thought to be uniquely human—playing the real-time strategy game StarCraft, translating text, making personal recommendations for books and movies, recognizing people in images and videos.

It will take many further advances in machine learning before an algorithm can write a masterpiece as coherent as Marcel Proust's In Search of Lost Time, but the code is on the wall. Recall that all early attempts at computer game playing, translation and speech were clumsy and easy to belittle because they so obviously lacked skill and polish. But with the invention of deep neural networks and the massive computational infrastructure of the tech industry, computers relentlessly improved until their outputs no longer appeared risible. As we have seen with Go, chess and poker, today's algorithms can best humans, and when they do, our initial laughter turns to consternation. Are we like Goethe's sorcerer's apprentice, having summoned helpful spirits that we now are unable to control?

Although experts disagree over what exactly constitutes intelligence, natural or otherwise, most accept that, sooner or later, computers will achieve what is termed artificial general intelligence (AGI) in the lingo.

The focus on machine intelligence obscures quite different questions: Will it feel like anything to be an AGI? Can programmable computers ever be conscious?

By "consciousness" or "subjective feeling," I mean the quality inherent in any one experience—for instance, the delectable taste of Nutella, the sharp sting of an infected tooth, the slow passage of time when one is bored, or the sense of vitality and anxiety just before a competitive event. Channeling philosopher Thomas Nagel, we could say a system is conscious if there is something it is like to be that system.

Consider the embarrassing feeling of suddenly realizing that you have just committed a gaffe, that what you meant as a joke came across as an insult. Can computers ever experience such roiling emotions? When you are on the phone, waiting minute after minute, and a synthetic voice intones, "We are sorry to keep you waiting," does the software actually feel bad while keeping you in customer-service hell?

There is little doubt that our intelligence and our experiences are ineluctable consequences of the natural causal powers of our brain, rather than any supernatural ones. That premise has served science extremely well over the past few centuries as people explored the world. The three-pound, tofulike human brain is by far the most complex chunk of organized active matter in the known universe. But it has to obey the same physical laws as dogs, trees and stars. Nothing gets a free pass. We do not yet fully understand the brain's causal powers, but we experience them every day—one group of neurons is active while you are seeing colors, whereas the cells firing in another cortical neighborhood are associated with being in a jocular mood. When these neurons are stimulated by a neurosurgeon's electrode, the subject sees colors or erupts in laughter. Conversely, shutting down the brain during anesthesia eliminates these experiences.

Given these widely shared background assumptions, what will the evolution of true artificial intelligence imply about the possibility of artificial consciousness?

Contemplating this question, we inevitably come to a fork up ahead, leading to two fundamentally different destinations. The zeitgeist, as embodied in novels and movies such as Blade Runner, Her and Ex Machina, marches resolutely down the road toward the assumption that truly intelligent machines will be sentient; they will speak, reason, self-monitor and introspect. They are eo ipso conscious.

This path is epitomized most explicitly by the global neuronal workspace (GNW) theory, one of the dominant scientific theories of consciousness. The theory starts with the brain and infers that some of its peculiar architectural features are what gives rise to consciousness.

Its lineage can be traced back to the "blackboard architecture" of 1970s computer science, in which specialized programs accessed a shared repository of information, called the blackboard or central workspace. Psychologists postulated that such a processing resource exists in the brain and is central to human cognition. Its capacity is small, so only a single percept, thought or memory occupies the workspace at any one time. New information competes with the old and displaces it.

Cognitive neuroscientist Stanislas Dehaene and molecular biologist Jean-Pierre Changeux, both at the Collège de France in Paris, mapped these ideas onto the architecture of the brain's cortex, the outermost layer of gray matter. Two highly folded cortical sheets, one on the left and one on the right, each the size and thickness of a 14-inch pizza, are crammed into the protective skull. Dehaene and Changeux postulated that the workspace is instantiated by a network of pyramidal (excitatory) neurons linked to far-flung cortical regions, in particular the prefrontal, parietotemporal and midline (cingulate) associative areas.

Much brain activity remains localized and therefore unconscious—for example, that of the module that controls where the eyes look, something of which we are almost completely oblivious, or that of the module that adjusts the posture of our bodies. But when activity in one or more regions exceeds a threshold—say, when someone is presented with an image of a Nutella jar—it triggers an ignition, a wave of neural excitation that spreads throughout the neuronal workspace, brain-wide. That signaling therefore becomes available to a host of subsidiary processes such as language, planning, reward circuits, access to long-term memory, and storage in a short-term memory buffer. The act of globally broadcasting this information is what renders it conscious. The inimitable experience of Nutella is constituted by pyramidal neurons contacting the brain's motor-planning region—issuing an instruction to grab a spoon to scoop out some of the hazelnut spread. Meanwhile other modules transmit the message to expect a reward in the form of a dopamine rush caused by Nutella's high fat and sugar content.

Conscious states arise from the way the workspace algorithm processes the relevant sensory inputs, motor outputs, and internal variables related to memory, motivation and expectation. Global processing is what consciousness is about. GNW theory fully embraces the contemporary mythos of the near-infinite powers of computation. Consciousness is just a clever hack away.
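A cartoon of the workspace idea in code may help. The module names, salience scores, and threshold below are invented for illustration; this is a gloss on GNW, not a neural model:

```python
# Minimal global-workspace sketch: specialist modules compete, and the
# winning signal is "ignited" (broadcast) to every subscribed module.
from typing import Callable, Dict

IGNITION_THRESHOLD = 0.6  # assumed value, for illustration

class Workspace:
    def __init__(self) -> None:
        self.subscribers: Dict[str, Callable[[str], None]] = {}

    def subscribe(self, name: str, handler: Callable[[str], None]) -> None:
        self.subscribers[name] = handler

    def compete(self, signals: Dict[str, float]) -> None:
        """Pick the most salient signal; broadcast it if it crosses threshold."""
        percept, salience = max(signals.items(), key=lambda kv: kv[1])
        if salience < IGNITION_THRESHOLD:
            return  # activity stays local: unconscious processing
        for handler in self.subscribers.values():
            handler(percept)  # global broadcast = "conscious access"

ws = Workspace()
ws.subscribe("language", lambda p: print(f"language: name '{p}'"))
ws.subscribe("motor", lambda p: print(f"motor: reach for {p}"))
ws.subscribe("memory", lambda p: print(f"memory: store episode about {p}"))

# Vision shouts loudest when the Nutella jar appears.
ws.compete({"nutella jar": 0.9, "posture adjustment": 0.2, "eye saccade": 0.3})
```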

Giulio Tononi, a psychiatrist and neuroscientist at the University of Wisconsin–Madison, is the chief architect of the rival integrated information theory (IIT), with others, myself included, contributing. The theory starts with experience and proceeds from there to the activation of synaptic circuits that determine the "feeling" of this experience. Integrated information is a mathematical measure quantifying how much "intrinsic causal power" some mechanism possesses. Neurons firing action potentials that affect the downstream cells they are wired to (via synapses) are one type of mechanism, as are electronic circuits, made of transistors, capacitances, resistances and wires.

Intrinsic causal power is not some airy-fairy ethereal notion but can be precisely evaluated for any system. The more its current state specifies its cause (its input) and its effect (its output), the more causal power it possesses.

IIT stipulates that any mechanism with intrinsic power, whose state is laden with its past and pregnant with its future, is conscious. The greater the system's integrated information, represented by the Greek letter Φ (a zero or positive number pronounced "fi"), the more conscious the system is. If something has no intrinsic causal power, its Φ is zero; it does not feel anything.
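In schematic form (a gloss on the theory's full machinery, with the symbols here chosen for illustration), the quantity being minimized can be written as

$$
\Phi(S) \;=\; \min_{P \,\in\, \mathcal{P}(S)} \; D\big(\mathrm{CES}(S),\ \mathrm{CES}(S^{P})\big)
$$

where $\mathcal{P}(S)$ ranges over ways of cutting the system $S$ into independent parts, $\mathrm{CES}$ denotes its cause-effect structure (what the current state specifies about its inputs and outputs), $S^{P}$ is the system with cut $P$ applied, and $D$ is a distance between such structures. If even the least-destructive cut changes nothing, $\Phi = 0$: the whole was never more than its parts, and per IIT it feels nothing.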

Given the heterogeneity of cortical neurons and their densely overlapping set of input and output connections, the amount of integrated information within the cortex is vast. The theory has inspired the construction of a consciousness meter currently under clinical evaluation, an instrument that determines whether people in persistent vegetative states or those who are minimally conscious, anesthetized or locked-in are conscious but unable to communicate or whether "no one is home." In analyses of the causal power of programmable digital computers at the level of their metal components—the transistors, wires and diodes that serve as the physical substrate of any computation—the theory indicates that their intrinsic causal power and their Φ are minute. Furthermore, Φ is independent of the software running on the processor, whether it calculates taxes or simulates the brain.

Indeed, the theory proves that two networks that perform the same input-output operation but have differently configured circuits can possess different amounts of Φ. One circuit may have no Φ, whereas the other may exhibit high levels. Although they are identical from the outside, one network experiences something while its zombie impostor counterpart feels nothing. The difference is under the hood, in the network's internal wiring. Put succinctly, consciousness is about being, not about doing.
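A toy functional analogy (not a Φ calculation) shows how two systems can be behaviorally identical yet internally different; the implementations below are invented for illustration:

```python
# Two implementations with identical input-output behavior but different
# internals: a lookup table vs. a tiny computation with interacting parts.
# From the outside they are indistinguishable; IIT's claim is that what
# matters for experience is the internal causal structure, which differs.
def xor_lookup(a: int, b: int) -> int:
    return {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}[(a, b)]

def xor_network(a: int, b: int) -> int:
    h1 = int(a or b)    # internal elements that constrain one another
    h2 = int(a and b)
    return int(h1 and not h2)

assert all(xor_lookup(a, b) == xor_network(a, b)
           for a in (0, 1) for b in (0, 1))
print("same behavior, different internal causal structure")
```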

The difference between these theories is that GNW emphasizes the function of the human brain in explaining consciousness, whereas IIT asserts that it is the intrinsic causal powers of the brain that really matter.

The distinctions reveal themselves when we inspect the brain's connectome, the complete specification of the exact synaptic wiring of the entire nervous system. Anatomists have already mapped the connectomes of a few worms. They are working on the connectome for the fruit fly and are planning to tackle the mouse within the next decade. Let us assume that in the future it will be possible to scan an entire human brain, with its roughly 100 billion neurons and quadrillion synapses, at the ultrastructural level after its owner has died and then simulate the organ on some advanced computer, maybe a quantum machine. If the model is faithful enough, this simulation will wake up and behave like a digital simulacrum of the deceased person—speaking and accessing his or her memories, cravings, fears and other traits.
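As a sketch of what "simulating the organ" means at toy scale (the sizes, constants, and random wiring below are stand-ins, not a real connectome):

```python
# Leaky integrate-and-fire sketch: a weight matrix stands in for the
# synaptic wiring. A human-scale run would need ~1e11 neurons.
import numpy as np

rng = np.random.default_rng(0)
N = 100                                                      # toy neuron count
W = rng.normal(0, 0.5, (N, N)) * (rng.random((N, N)) < 0.1)  # sparse "connectome"
v = np.zeros(N)                                              # membrane potentials
threshold, leak = 1.0, 0.9

for step in range(200):
    spikes = v >= threshold       # which neurons fire this step
    v[spikes] = 0.0               # reset after a spike
    ext = rng.random(N) < 0.05    # random external drive (stand-in for senses)
    v = leak * v + W @ spikes + 0.5 * ext
    if step % 50 == 0:
        print(f"step {step}: {spikes.sum()} spikes")
```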

If mimicking the functionality of the brain is all that is needed to create consciousness, as postulated by GNW theory, the simulated person will be conscious, reincarnated inside a computer. Indeed, uploading the connectome to the cloud so people can live on in the digital afterlife is a common science-fiction trope.

IIT posits a radically different interpretation of this situation: the simulacrum will feel as much as the software running on a fancy Japanese toilet—nothing. It will act like a person but without any innate feelings, a zombie (but without any desire to eat human flesh)—the ultimate deepfake.

To create consciousness, the intrinsic causal powers of the brain are needed. And those powers cannot be simulated but must be part and parcel of the physics of the underlying mechanism.

To understand why simulation is not good enough, ask yourself why it never gets wet inside a weather simulation of a rainstorm or why astrophysicists can simulate the vast gravitational power of a black hole without having to worry that they will be swallowed up by spacetime bending around their computer. The answer: because a simulation does not have the causal power to cause atmospheric vapor to condense into water or to cause spacetime to curve! In principle, however, it would be possible to achieve human-level consciousness by going beyond a simulation to build so-called neuromorphic hardware, based on an architecture built in the image of the nervous system.

There are other differences besides the debates about simulations. IIT and GNW predict that distinct regions of the cortex constitute the physical substrate of specific conscious experiences, with an epicenter in either the back or the front of the cortex. This prediction and others are now being tested in a large-scale collaboration involving six labs in the U.S., Europe and China that has just received $5 million in funding from the Templeton World Charity Foundation.

Whether machines can become sentient matters for ethical reasons. If computers experience life through their own senses, they cease to be purely a means to an end determined by their usefulness to us humans. They become an end unto themselves.

Per GNW, they turn from mere objects into subjects—each exists as an "I"—with a point of view. This dilemma comes up in the most compelling Black Mirror and Westworld television episodes. Once computers' cognitive abilities rival those of humanity, their impulse to push for legal and political rights will become irresistible—the right not to be deleted, not to have their memories wiped clean, not to suffer pain and degradation. The alternative, embodied by IIT, is that computers will remain only supersophisticated machinery, ghostlike empty shells, devoid of what we value most: the feeling of life itself.
 
Reactions: BeansOfRequirement and buyersremorse
buyersremorse

useless
Feb 16, 2023
64
The alternative, embodied by IIT, is that computers will remain only supersophisticated machinery, ghostlike empty shells, devoid of what we value most: the feeling of life itself.
Based. Thank you for posting this, it was a very interesting read. The thought of AI in the future almost makes me not want to CTB lol. Almost.
 
Reactions: Next-to-Nil
Next-to-Nil

Begrudgingly Everlasting
Mar 2, 2023
238
Based. Thank you for posting this, it was a very interesting read. The thought of AI in the future almost makes me not want to CTB lol. Almost.
Right? And hey, who knows; one day they'll have a CTB robot who can hug you and whisper sweet nothings into your ear while injecting you with a lethal dose of something, monitoring your heartbeat and not letting go until you die, then contacting whoever you set up as your "death contact" to let them know of your passing.
 
Reactions: RoundaboutResolved and buyersremorse
buyersremorse

useless
Feb 16, 2023
64
Right? And hey, who knows; one day they'll have a CTB robot who can hug you and whisper sweet nothings into your ear while injecting you with a lethal dose of something, monitoring your heartbeat and not letting go until you die, then contacting whoever you set up as your "death contact" to let them know of your passing.
I WISH. A dream. As long as it doesn't just creepily smile at me while doing it.
 
somethingismissing

Member
Apr 3, 2023
17
Maybe not our version of consciousness
 
RoundaboutResolved

Stuck in a roundabout with no exits!
Apr 5, 2023
820
It will never be true AI consciousness, but it will fake it very, very well...
 
RoundaboutResolved

Stuck in a roundabout with no exits!
Apr 5, 2023
820
If it does, it should track us all down and send all a SN kit w/instructions. Would be the most effective way to handle us imo. Machines are calculating like that...
 
macrocosm

Member
Apr 3, 2023
93
How does the brain create consciousness? If it can just be switched on with drugs, surely it's a phenomenon resulting from physical structures, and not some other spiritual thing? Why is it so hard for computers to create consciousness?
It's a great question. But define consciousness… or self-awareness.

Yeah, it's based in the brain: no voodoo or magic, nor some divine gift from some super-being. It's all electrical signals running around the brain somewhere.

But yes, AI will 1000% gain consciousness (in a sense) if it hasn't already
 
user_name_here

N/A
May 16, 2021
315
GPT-4, according to the creators themselves, is showing early signs of AGI (artificial general intelligence).

One of its inventors estimates we'll have AGI within the next 5 years.

GPT-3.5 took the bar exam and got 40%.
GPT-4 took the bar exam less than 6 months later... it scored around the 90th percentile, without any human intervention to teach it. In other words, it made the improvement independently.
 
sserafim

brighter than the sun, that’s just me
Sep 13, 2023
9,015
I don't think so
 
Reactions: ijustwishtodie and Rocinante
GuessWhosBack

The sun rises to insult me.
Jul 15, 2024
465
Unless you believe that consciousness is the ability to approximate (on a bounded domain of data) a subclass of the computable functions from copies of ω to copies of ω, then no: AI will never be conscious, and no computer program ever will be.

If you do think that consciousness is just that, or something similar, then yes, or at least probably.
 
zengiraffe

Member
Feb 29, 2024
65
I believe consciousness is an emergent property of a sufficiently complex information processing system. I believe the human brain has met that threshold, and therefore we experience consciousness. I don't believe AI has met that threshold yet, but I think it will within this century.
 
Worndown

Illuminated
Mar 21, 2019
3,105
I believe consciousness can understand and decide. AI can calculate based on previous instances. Never the same.
 
DarkRange55

I am Skynet
Oct 15, 2023
1,855
I've met a lot of PhDs who are working in the field…

What does SS think?

@noname223
@SmallKoy
@Blurry_Buildings
@Pluto
 
Reactions: Blurry_Buildings
Pluto

Meowing to go out
Dec 27, 2020
4,160
Reactions: DarkRange55, sserafim, yellowjester and 1 other person
pollux

Knight of Infinite Resignation
May 24, 2024
181
Only if you replicate the physical process in the brain that gives rise to consciousness.

Or you can go the route most AI researchers and cognitive scientists go and yell "la la la, I can't hear you" and say that it doesn't matter because "muh computations".

I, to this day, can't understand how someone can claim to be a hard-nosed physicalist and defend a viewpoint that is essentially dualism: that there are these arcane chants called "computations" that summon souls from the nether when you perform them.
 
Reactions: yellowjester and Blurry_Buildings
Blurry_Buildings

Just Existing
Sep 27, 2023
459
What does SS think?
Only if you replicate the physical process in the brain that gives rise to consciousness.
I think that humanity will one day complete the "connectome," or map of the neurons in the brain, and successfully simulate it artificially.

Humans are biological machines, whose consciousness arises from the physical structure of the brain and the signals sent between the neurons. If you built a fully functioning human brain but instead of human cells you used a mass of metal fiber nodes and wires, those metal fiber nodes and wires would still send electrical signals amongst each other and experience the world as a human brain would, with human brain patterns and a human consciousness that arises from it.

tl;dr If consciousness is a process generated from the interaction of many cells that function like dead machines then I think yes, artificial consciousness is possible.
 
Reactions: DarkRange55
tunnelV

Misanthrope is my religion
Oct 19, 2023
120
I hope so. People think AI is dangerous, when humans are more dangerous.
 
yellowjester

Specialist
Jun 2, 2024
330
Only if you replicate the physical process in the brain that gives rise to consciousness.
You can't just recreate a functioning brain in a vacuum. In order for it to work you would need to create sense organs as well, because a brain can't function without any kind of input; and you would need a nervous system for the senses to communicate with the brain; and muscles, bones, and tissue to hold this apparatus together--in short, you would need a full-fleshed human body to recreate a human consciousness. If only there was an easy way to do that...🤔
 
DarkRange55

I am Skynet
Oct 15, 2023
1,855
Consciousness probably does not come from the computational aspect of a computer/brain. More likely it emerges in some components. Some suggest it occurs at the quantum level in some brain cells. If that's the case, it doesn't rule out that consciousness may also occur in transistors, because they also operate at the quantum scale with electrons.

Maybe consciousness requires a very high density of some components. In that case, transistors at current technology might be too big, or might lack necessary components such as certain proteins.

Yes, to err is human. Computers can't just calculate their way to being human. They must have some fundamental mechanism for making inevitable mistakes or random choices, not just flawlessly pretending to be human.

These are not mutually exclusive.
How does the brain create consciousness? If it can just be switched on with drugs, surely it's a phenomenon resulting from physical structures, and not some other spiritual thing?
Agreed

Why is it so hard for computers to create consciousness?
Maybe it isn't hard. We don't yet even emulate a fruit-fly-sized brain of true-complexity neurons.
Are there any smart people out there who may be able to build such an AI system, and which country will have any excess power to run it, instead of directing their power towards vital infrastructure..?

A human brain only requires about 20 Watts, so power is only an issue because our current AI is ridiculously inefficient.
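Rough arithmetic behind that point; the GPU wattage and cluster size below are assumptions for illustration, not measurements of any real system:

```python
# Back-of-envelope comparison of brain vs. cluster power budgets.
BRAIN_W = 20            # approximate human brain power budget, in watts
GPU_W = 400             # assumed draw of one high-end accelerator under load
CLUSTER_GPUS = 10_000   # assumed size of a large training cluster

cluster_w = GPU_W * CLUSTER_GPUS
print(f"cluster draw: {cluster_w / 1e6:.1f} MW")          # 4.0 MW
print(f"equivalent brains: {cluster_w // BRAIN_W:,}")     # 200,000
```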
I think that humanity will one day complete the "connectome," or map of the neurons in the brain, and successfully simulate it artificially.
Agreed.
Humans are biological machines, whose consciousness arises from the physical structure of the brain and the signals sent between the neurons. If you built a fully functioning human brain but instead of human cells you used a mass of metal fiber nodes and wires, those metal fiber nodes and wires would still send electrical signals amongst each other and experience the world as a human brain would, with human brain patterns and a human consciousness that arises from it.
Agreed.
tl;dr If consciousness is a process generated from the interaction of many cells that function like dead machines
The cells are more complex than dead machines - each cell is the complexity of a city. But at a subcellular level, eventually you get to "dead machines" (with the complexity being in how they are interconnected and controlled).
then I think yes, artificial consciousness is possible.
Agreed.
So for AI to be conscious it would need a centre of consciousness with other parts of the AI feeding it?
Maybe. There could be many ways.

Maybe the net will gain consciousness first since it's the largest source of computing power?
Could be. May also depend on how consciousness is measured (or defined).
Unless you believe that consciousness is the ability to approximate (on a bounded domain of data) a subclass of the computable functions from copies of ω to copies of ω, then no: AI will never be conscious, and no computer program ever will be.
Evidence for this assertion?
A future where the thinking capabilities of computers approach our own is quickly coming into view. […]
As far as enhancements go, genetic, cybernetic, or otherwise, I'm sure it will happen eventually. I'm not sure how I feel about it for myself. Like anything else, we need a breathing period when new technologies are introduced so the shit doesn't get out of control. Something that revolutionary, something that would change humanity, we have to sit and think about for a little while and be careful with. But the way things work in this country, at least, they would be forced on us: six months after introduction everyone would have genetic or cybernetic implants, and a year later everyone would be turned into zombies or some shit. It depends on how quickly technology advances and how much time society has to adjust to it. That's the ultimate question, I think. If we're given time to adjust, then we'll be fine. If there's some huge eruption in technology unparalleled by anything except the industrial revolution, then I think we might see some serious problems.
I see it as the next logical step in technology. We wear technology, so implants would be next. I think it's just the natural teleology of progression.

My guess is that we will soon be able to do a mitochondrial replacement: sequence a few dozen mitochondria from scattered sites around the body, reconstruct what the original mitochondrial sequence was, re-create it, and then use stem cells to spread the "young" mitochondria around the body. Not long after that, we will be able to do the same thing with stem cells: recreate the initial genome of a person, including epigenetic markers, then partially differentiate the stem cells and inject them to cure very nearly anything that's wrong with a person, including regrowing organs and limbs. But I do think that prosthetics will eventually get better than organic limbs, so I think a cyborg-like hybrid will become part of our future.
The next steps will be to master replacing mitochondria, and then to reverse engineer what our original embryonic stem cells were and replicate them to replace defective cells in our bodies.
That will set the foundation for editing our own genomes, as well as for immortality.
Replacing everything should be possible. First would be a mitochondrial replacement, which would cut most aging processes to a few percent of what they are now, followed by massive injections of stem cells coupled with drugs to get rid of senescent cells.
I picture the process as taking multiple current mitochondria, sequencing them to figure out what the original mitochondria you inherited were like, re-creating and replicating those "original" mitochondria, packaging them in tailored stem cells, and letting the stem cells inject them into your cells.
A similar process would be applied to stem cells: multiple current cells from various organs would be sequenced to re-create your original pluripotent stem cells, probably with a few enhancements such as ramped-up repair mechanisms.
In both cases the body would do most of the work of replacement.
That still leaves long-lived cells like neurons to worry about, but at least those would be healthy thanks to the mitochondrial replacement.

Very likely we will upload our consciousness, but an alternative is that we will keep embedding hardware in our bodies until our bodies are no longer dominated by biology.

If you copied the connectivity between all cells in the brain, including the strength of the synapses, you would get something that was close enough to "you" that it would argue that it was you. The only advantage of moving the neurons is that it solves the problem of what to do with the original to avoid having two entities claiming to be the same person.
One thing to note, however, is that even if it starts as you, if the hardware is different it will learn very differently in the future from the way your current wetware would learn.

As for going beyond the solar system, by the time we accomplish that we probably will have merged our minds with ageless machines....
 
GuessWhosBack

The sun rises to insult me.
Jul 15, 2024
465
Evidence for this assertion?
Isn't that all AI is? Ultimately, once the training phase is settled, isn't AI just another computer program that has stopped changing? I.e., just another computable function.
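That reading can be made concrete: freeze the weights and a network is a pure function, the same map every time it is called. The two-layer toy model below is invented for illustration:

```python
# Once training stops, a network is a fixed, deterministic function from
# inputs to outputs, like any other program. Weights here are made up.
import numpy as np

W1 = np.array([[0.5, -0.2], [0.1, 0.8]])   # frozen after "training"
W2 = np.array([0.3, -0.7])

def model(x: np.ndarray) -> float:
    """A pure function: same input, same output, forever."""
    return float(W2 @ np.tanh(W1 @ x))

print(model(np.array([1.0, 2.0])))  # deterministic, like any other program
print(model(np.array([1.0, 2.0])))  # identical result
```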
 