DarkRange55
I am Skynet
- Oct 15, 2023
- 1,842
This is the simplest explanation I can give:
GPTs (Generative Pre-trained Transformers) are a type of Large Language Model (LLM), which is a subset of artificial intelligence. They're built from layers of inputs and outputs: some data comes in, gets processed mathematically, and produces an output. That output then becomes the input for the next layer, which does its own processing, and so on. You can literally have billions of parameters. The signal works its way up through the layers mathematically until you get some kind of final output, which might be a probability, a binary value, or a range. That's a quick overview of how it works.
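To make the "layers of inputs and outputs" idea concrete, here's a minimal sketch of a two-layer network in Python with NumPy. The layer sizes, random weights, and the tanh/softmax choices are all illustrative assumptions on my part, nothing like a real GPT, which just stacks far more of these layers:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    # One layer: a matrix multiply (the "mathematical processing")
    # followed by a nonlinearity; its output feeds the next layer.
    return np.tanh(x @ w + b)

# Illustrative sizes: 4 inputs -> 8 hidden units -> 3 outputs.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

x = rng.normal(size=(1, 4))   # some data comes in
h = layer(x, w1, b1)          # first layer's output...
logits = h @ w2 + b2          # ...becomes the next layer's input

# Softmax turns the final numbers into probabilities that sum to 1.
probs = np.exp(logits) / np.exp(logits).sum()
print(probs)
```

This toy version has 67 parameters (the entries of w1, b1, w2, b2); a GPT works the same way in spirit but with billions of them, learned from data instead of drawn at random.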
For the record: A simple sensor would not be AI.
In theory, computers can achieve any kind of thinking that we do.
Even traditional programming (as opposed to training an AI) can perform abductive logic; my mentor wrote a crude program of that kind back in the early 1970s.
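To illustrate, here's a hand-rolled sketch of abductive inference in plain Python: hard-coded rules map each hypothesis to the observations it would explain, and the program picks the hypothesis that explains the most of what was actually observed. The rules and scoring are made up for illustration; this isn't the 1970s program mentioned above.

```python
# A toy abductive reasoner: infer the most plausible cause
# from observed effects. Rules and observations are illustrative.
RULES = {
    "it rained":     {"grass is wet", "street is wet"},
    "sprinkler ran": {"grass is wet"},
    "pipe burst":    {"street is wet", "water bill is high"},
}

def abduce(observations):
    # Score each hypothesis by how many observations it explains,
    # then return the best explanation.
    scores = {h: len(effects & observations) for h, effects in RULES.items()}
    return max(scores, key=scores.get)

print(abduce({"grass is wet", "street is wet"}))  # -> "it rained"
```

Nothing here is learned: it's ordinary deterministic code, which is the point. Abduction-style reasoning doesn't require a neural network.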