DarkRange55

I am Skynet
Oct 15, 2023
1,842
This is the simplest explanation I can give:

GPTs (Generative Pre-trained Transformers) are a type of Large Language Model (LLM), which is a subset of artificial intelligence. They're built from layers of inputs and outputs: some data comes in, gets processed mathematically, and produces an output. That output then becomes the input for the next layer, which does its own processing, and so on. You can literally have billions of parameters. The result works its way up mathematically and gives you some kind of output, which might be a probability, a binary value, or a range. That's a quick overview of how it works.
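If it helps to see the layer idea as code, here's a tiny sketch in Python/NumPy. It is nothing like a real GPT (no attention, no training, made-up sizes); it just shows the bare mechanics of one layer's output becoming the next layer's input, with the final output turned into probabilities:

import numpy as np

def layer(x, weights, bias):
    # One layer: multiply the input by weights, add a bias, squash the result.
    return np.tanh(x @ weights + bias)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                          # some data coming in
w1, b1 = rng.normal(size=(4, 8)), rng.normal(size=8)
w2, b2 = rng.normal(size=(8, 3)), rng.normal(size=3)

h = layer(x, w1, b1)                            # first layer's output...
logits = h @ w2 + b2                            # ...is the second layer's input
probs = np.exp(logits) / np.exp(logits).sum()   # final output as probabilities
print(probs)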

For the record: A simple sensor would not be AI.

In theory, computers can achieve any kind of thinking that we do.
Even traditional programming (as opposed to teaching an AI) can perform abductive logic; my mentor wrote a crude program for it back in the early 1970s.
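For the curious, abduction in a traditional program can be as simple as reasoning backwards from an observed effect to the causes that would explain it. A toy sketch (the rules and names here are made up; my mentor's program was far more elaborate):

# Toy abduction: reason backwards from an observed effect to candidate causes.
rules = {
    "rain": "wet grass",
    "sprinkler": "wet grass",
    "frost": "dead plants",
}

def abduce(observation):
    # Return every cause whose known effect matches what we observed.
    return [cause for cause, effect in rules.items() if effect == observation]

print(abduce("wet grass"))   # ['rain', 'sprinkler'] -- two possible explanations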
 
  • Like
  • Informative
Reactions: Praestat_Mori, katagiri83, noname223 and 3 others
cali22♡

Selfharm Specialist♡
Nov 11, 2023
345
That's how I would explain it:

ChatGPT is NOT real intelligence... I personally call it a "crawler", which is what it is: ChatGPT collects its information from the internet and gives it back to us in plain language...
 
  • Like
Reactions: Praestat_Mori, katagiri83, Forever Sleep and 2 others
Dr Iron Arc

Into the Unknown
Feb 10, 2020
21,154
What? You mean the artificial intelligence doesn't really love me? Oh no!
 
  • Yay!
Reactions: Praestat_Mori, Forever Sleep and NoPoint2Life
DarkRange55

I am Skynet
Oct 15, 2023
1,842
That's how I would explain it:

ChatGPT is NOT real intelligence... I personally call it a "crawler", which is what it is: ChatGPT collects its information from the internet and gives it back to us in plain language...
Yes!

GPT is a type of processing; it's a large language model.

You need a training set, a collection of material that the computer processes to produce the output. Today the training set is effectively the internet, though developers typically use a subset. The model goes through word association and develops a cloud of words: when one word appears frequently in close conjunction with another word in the training set, the computer decides those words must go together, like "hot day" or "cold day" or whatever. That's a quick overview. So there is no digital brain or simulated brain in the human sense behind it.
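Here's a toy version of that word-association idea in Python, just counting which words sit next to each other in a made-up training set. Real models learn far richer statistics than neighbouring-word counts, so treat this purely as an illustration:

from collections import Counter

# A tiny, made-up "training set".
training_set = [
    "it was a hot day in the desert",
    "another hot day followed the storm",
    "a cold day in the mountains",
]

pairs = Counter()
for sentence in training_set:
    words = sentence.split()
    for a, b in zip(words, words[1:]):   # neighbouring words
        pairs[(a, b)] += 1

# Pairs that co-occur often, like ("hot", "day"), get higher counts,
# so the program "learns" they belong together without understanding either word.
print(pairs.most_common(5))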

However, for the sake of argument: how different is that from how a child learns its first language? Or from the way many people form opinions without deep thought?
 
  • Like
  • Informative
Reactions: Praestat_Mori, katagiri83, Dr Iron Arc and 2 others
Pluto

Meowing to go out
Dec 27, 2020
4,104
  • Informative
  • Hugs
  • Love
Reactions: DarkRange55, Praestat_Mori and Dr Iron Arc
Hvergelmir

Experienced
May 5, 2024
258
So there is no digital brain or simulated brain in the human sense behind it.
Well, that's where experts disagree with one another. I guess it comes down to how you'd define "human sense".

A neural net can approximate essentially any mathematical function (the universal approximation theorem); that was proven long before "AI" was a big thing.
The implication is that a neural net can simulate the universe, or parts of it, depending on the size of the net.

If it evolves an accurate model of something, instead of relying on pure word association, it will score much better in evaluation (the training objective).
Thus any accurate models that emerge tend to stay and evolve.
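If you want to see the approximation claim in miniature, here is a NumPy sketch that trains a tiny one-hidden-layer net to reproduce sin(x) by gradient descent. The sizes and learning rate are arbitrary; the point is only that the training score drives the net toward an accurate model of the function:

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)                                   # the function we want the net to approximate

# One hidden layer of 32 tanh units is enough for a decent fit here.
w1, b1 = rng.normal(scale=0.5, size=(1, 32)), np.zeros(32)
w2, b2 = rng.normal(scale=0.5, size=(32, 1)), np.zeros(1)

lr = 0.1
for step in range(10000):
    h = np.tanh(x @ w1 + b1)                    # hidden layer
    pred = h @ w2 + b2                          # network output
    err = pred - y
    # Backpropagation: nudge every weight to reduce the squared error.
    dh = (err @ w2.T) * (1 - h ** 2)
    w2 -= lr * (h.T @ err) / len(x)
    b2 -= lr * err.mean(axis=0)
    w1 -= lr * (x.T @ dh) / len(x)
    b1 -= lr * dh.mean(axis=0)

# Always guessing zero would give a mean squared error of about 0.5;
# after training it should be far lower, i.e. the net has "modelled" sin(x).
print("mean squared error:", float((err ** 2).mean()))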

For fun I threw the following query at it:
I have a graph: 2, 16, 4, B, a dog, 13, A, C
The letter all equals 20.
What is the average of the graph? What is the sum of all the points? Which is the single highest point?
It did state that "a dog" must be some kind of placeholder, doesn't have a numerical value, and will be treated as zero. It then proceeded to do the calculations correctly.
This is extremely hard to prove, but I think it would be unreasonably hard to answer this with word association alone.
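For reference, here's the arithmetic it had to reproduce, written out in Python (assuming, as it did, that the letters count as 20 and the dog as a zero-valued point):

# Letters are worth 20; "a dog" is treated as a zero-valued data point.
points = {"2": 2, "16": 16, "4": 4, "B": 20, "a dog": 0, "13": 13, "A": 20, "C": 20}

total = sum(points.values())          # 95
average = total / len(points)         # 95 / 8 = 11.875
highest = max(points.values())        # 20 (the letters tie for highest)

print(total, average, highest)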

It's far from perfect, but I think it performs much better than one would expect from pure linguistic pattern-matching. Conversations are inherently related to reality. Thus, to be a good conversationalist it helps to understand reality. I think ChatGPT is evolving just that.
 
  • Like
Reactions: DarkRange55 and Praestat_Mori
DarkRange55

I am Skynet
Oct 15, 2023
1,842
Well, that's where experts disagree with one another. I guess it comes down to how you'd define "human sense".

A neural net can approximate essentially any mathematical function (the universal approximation theorem); that was proven long before "AI" was a big thing.
The implication is that a neural net can simulate the universe, or parts of it, depending on the size of the net.

If it evolves an accurate model of something, instead of relying on pure word association, it will score much better in evaluation (the training objective).
Thus any accurate models that emerge tend to stay and evolve.

For fun I threw the following query at it:

It did state that "a dog" must be some kind of placeholder, doesn't have a numerical value, and will be treated as zero. It then proceeded to do the calculations correctly.
This is extremely hard to prove, but I think it would be unreasonably hard to answer this with word association alone.

It's far from perfect, but I think it performs much better than one would expect from pure linguistic pattern-matching. Conversations are inherently related to reality. Thus, to be a good conversationalist it helps to understand reality. I think ChatGPT is evolving just that.
Relevant to the comparison of current AI and human cognition: https://techxplore.com/news/2024-11-cognitive-ai.html
 
  • Like
Reactions: Praestat_Mori
ShesPunishedForever

Punished
Sep 15, 2024
31
It's a difficult concept because we can't always apply human ways of understanding onto LLMs. My take is that ChatGPT doesn't interpret or "understand" things as words; it converts inputs into units called tokens, which can be symbols, single characters, or whole words. It's based on a transformer neural network that gives it self-attention to contextualize tokens among other tokens, and other ways of relating them. But it doesn't interpret things as concepts: it reads and relates the tokens it's given, one after the other, trying to contextualize them, and then, based on its training and fine-tuning, it outputs tokens too.
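If you want to see the two pieces concretely, here's a rough Python sketch. The first part uses OpenAI's tiktoken package (assuming it's installed) to show text becoming integer tokens; the second part is scaled dot-product self-attention with made-up numbers, just to show what "contextualizing tokens among other tokens" means mechanically, not the real model's weights:

# Requires `pip install tiktoken numpy`.
import numpy as np
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")        # a tokenizer used by recent OpenAI models
tokens = enc.encode("ChatGPT doesn't read words.")
print(tokens)                                      # a list of integers, not words
print([enc.decode([t]) for t in tokens])           # some tokens are whole words, others fragments

# Toy self-attention: each token position takes a weighted mix of all the others.
rng = np.random.default_rng(0)
d = 16
emb = rng.normal(size=(len(tokens), d))            # pretend embeddings, one row per token
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

Q, K, V = emb @ Wq, emb @ Wk, emb @ Wv
scores = Q @ K.T / np.sqrt(d)                      # how relevant each token is to each other token
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
contextualized = weights @ V                       # each token now carries context from the rest
print(contextualized.shape)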
 
DarkRange55

I am Skynet
Oct 15, 2023
1,842
Well, that's where experts disagree with one another. I guess it comes down to how you'd define "human sense".

A neural net can approximate essentially any mathematical function (the universal approximation theorem); that was proven long before "AI" was a big thing.
The implication is that a neural net can simulate the universe, or parts of it, depending on the size of the net.

If it evolves an accurate model of something, instead of relying on pure word association, it will score much better in evaluation (the training objective).
Thus any accurate models that emerge tend to stay and evolve.

For fun I threw the following query at it:

It did state that "a dog" must be some kind of placeholder, doesn't have a numerical value, and will be treated as zero. It then proceeded to do the calculations correctly.
This is extremely hard to prove, but I think it would be unreasonably hard to answer this with word association alone.

It's far from perfect, but I think it performs much better than one would expect from pure linguistic pattern-matching. Conversations are inherently related to reality. Thus, to be a good conversationalist it helps to understand reality. I think ChatGPT is evolving just that.
I agree with your assessment. There is already something more than mere word association starting to emerge. More accurate internal models will improve answers (as they do for humans).
The other big improvement I expect is much higher learning efficiency. Current mainstream AIs require vast amounts of data compared to a human brain. An AI that learned as efficiently as humans and that had the speed of electronics and the scale to digest the whole internet's worth of data would be interesting to communicate with.
 
Hvergelmir

Experienced
May 5, 2024
258
The other big improvement I expect is much higher learning efficiency.
I hope you're right!
The current plateau is slowly draining my hope, though.

In contrast to people, a new neural net is a true blank slate.
Maybe human learning is more akin to LoRA/LoHa training, while the initial checkpoint training covers large parts of what evolution did for us? That's just me speculating, trying to draw parallels to more familiar concepts, though.
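For anyone unfamiliar with the comparison, here's a minimal sketch of what LoRA-style training means: the big pre-trained weight matrix stays frozen, and only a small low-rank correction on top of it gets trained. The sizes and names are just illustrative:

import numpy as np

rng = np.random.default_rng(0)
d, r = 512, 8                                     # layer width and adapter rank (r much smaller than d)

W_pretrained = rng.normal(size=(d, d))            # frozen checkpoint weights, never updated
A = rng.normal(scale=0.01, size=(d, r))           # only these two small matrices are trained
B = np.zeros((r, d))

W_effective = W_pretrained + A @ B                # what the adapted layer actually uses

print("full fine-tune parameters:", W_pretrained.size)   # 262144
print("LoRA-style parameters:    ", A.size + B.size)     # 8192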
 
DarkRange55

I am Skynet
Oct 15, 2023
1,842
It's a difficult concept because we can't always apply human ways of understanding onto LLMs.
Humans do a much better job of understanding than current LLMs, but the way we begin to reach that understanding is somewhat similar (at least in my case).

My take is that ChatGPT doesn't interpret or "understand" things as words; it converts inputs into units called tokens, which can be symbols, single characters, or whole words.
So do humans...

It's based on a transformer neural network
That's an implementation detail.

that gives it self-attention to contextualize tokens among other tokens, and other ways of relating them.
So do humans...

But it doesn't interpret things as concepts: it reads and relates the tokens it's given, one after the other, trying to contextualize them,
That's similar to what I do when starting to learn a new concept. For the first few times I read the name (often an acronym) of a new concept, I have to keep going back to a description (multiple tokens that I am familiar with) until the new concept 'sticks' as a single entity.

and then, based on its training and fine-tuning, it outputs tokens too.
So do humans...
 
UncertainA

Member
Jan 24, 2023
12
I actually really like this explanation. A lot better than my literal explanation lol.
 
