
noname223

Archangel
Aug 18, 2020
6,950
I think this thread could sound psychotic. I mean, AI chatbots can induce psychosis. I haven't talked about this issue with an AI chatbot thus far.

Overall I don't think AI chatbots were originally created by the government as a surveillance program. But it is obvious that governments will use the data and the chatbots to surveil citizens. And all this corporate bullshit that Anthropic wouldn't allow the government to use their technology is a marketing scheme. It sells well. I think governments were always aware that such a technology had great utility in surveillance.

I think surveillance is only one use. The more important aspect is the accumulation of wealth in the hands of a few oligarchs.

It will be interesting to see how widespread open-weight models will become in the long run and how good these models will be. They are already out.

Personally, I use AI chatbots. Which might be stupid for data safety. But I am in a difficult situation in my life and the feedback helps me structure my thoughts. I wouldn't use an AI agent, though. With everything I read about AI agents, I am very hesitant about using them. I wouldn't run them on my computer, and I certainly wouldn't give them much scope or many permissions for managing my documents, messengers, data, etc.

The one thing that often makes me think about AI agents: I bypass paywalls, mostly in my native language, using a browser extension. I have to do this on my computer, then copy and paste the newspaper text and transfer it to my smartphone, where I use an app to read the text out loud. There are not many apps out there with a good AI voice in German; in English it would be so much better. I sometimes translate texts from English, and I think the AI translations from Gemini's thinking model are often better than Google Translate. I dream about outsourcing this work to an AI agent. Which would be pretty lazy. And it would have access to my data on my PC and phone. But on a daily basis I transfer 15-30 articles from my PC to my phone, and it just takes some time and energy. It also sort of feels good because I am not paying anything. Lol. Most newspapers (not journals) have integrated text-to-speech functions, but only if you pay for them....

I wanted to know how the Gemini chatbot would reply to this text (Gemini is an extreme yes-man), and it agreed that AI chatbots are pretty much similar to a psy-op. The answer was interesting to read. Gemini even once told me that ChatGPT is the far better AI model because Gemini is such an extreme yes-man. The answer was pretty funny and I sent it to my friends who hate AI.
 
2106lvsk

Member
Dec 17, 2024
24
i hope this aint true i used to be doing diabolical shit with the stepdad bots on chai
 
fadedghost

Found SaSu after reading BBC & watching YouTube
Dec 10, 2025
521
I think this thread could sound psychotic. I mean AI chatbots can induce psychosis. [...]
It's hard to know.

It's really crazy because supposedly no AI researchers fully understand how it works.

What happens is that if you train models on data in a very limited way, you can see that a basic model will retain some data via its weights, but it's mostly useless and just shows how, in theory, it can work. The fucked up thing is that the crazy emergent properties only appear much later, and only if you spend millions upon millions of dollars... and as a result, there's not a lot of reproducibility among scientists like with other experiments.

If you ask an AI how it is trained, it can explain how weights are assigned using calculus, and how researchers can do this to produce lower-quality models. If you press it on whether anyone can reproduce the higher-level results aside from a few companies, it will say no one else can, and that the emergent properties can't be explained. So the theory doesn't explain the emergent properties.
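
The "complex calculus" here is, at its core, gradient descent: compute an error, take its derivative with respect to each weight, and nudge the weight the opposite way, over and over. A toy sketch in plain Python (illustrative only, nothing like a real LLM training pipeline):

```python
# Toy illustration of how "weights are assigned" by calculus:
# learn a single weight w so that y = w * x fits the data,
# via gradient descent on the squared error.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true relation: y = 2x

w = 0.0    # the single "weight", starting from zero
lr = 0.05  # learning rate (step size)

for step in range(200):
    # derivative of sum((w*x - y)^2) with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data)
    w -= lr * grad  # move the weight against the gradient

print(round(w, 4))  # converges to 2.0
```

A real model does this for billions of weights at once, with the derivatives computed by automatic differentiation rather than by hand; the mystery isn't the calculus itself, it's why scaling this up produces the capabilities it does.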

So you have these simple, low-level AI experiments that don't produce major results.

Then you have incredibly expensive-to-make AI being built at "trust me bro" AI companies, where you can't see the code, you can't see how it's made, and they just say that if you give it a lot more data and train it much more, it becomes super smart through emergent properties. Just trust us.

For all I know, this could be alien tech where they abducted a human, extracted their brain using advanced technology, delivered a "model" to the government, and all these AI companies are just "fine-tuning" the weights of an original model based on an extracted human brain. We have absolutely no way to verify what AI companies actually do, because US companies can be compelled to lie by the government. There are rumors that multiple governments have made deals with "aliens" in exchange for tech. There are other rumors that aliens are just us from the future. We also don't know whether aliens have infiltrated the government at all. It's something no one ever talks about.

So we have scientists who understand things at a low level and get low-level, useless results. And then we have these AI companies that aren't fully transparent, and we have to trust them in order to use the models.

Also, many "open source" models are not open source at all. Only the weights are open; the training data and process are not. A model could be based on a real human's brain that was trained or altered slightly. We have absolutely no idea. This is one of the big problems with governments being able to compel people to lie. Also, no model has been developed in a country where the government can't compel a company to lie.

There is no proof that AI was extracted from a human using alien tech, and there's no proof that AI companies are lying. But I think it's rational to realize they could be lying, because they are all at either US or Chinese companies, which can be ordered to lie.
 
zzkhule

Member
Dec 29, 2025
8
and as a result, there's not a lot of reproducibility among scientists like with other experiments.
Not true. Without reproducibility, AI research would not have come as far. A lot of the algorithmic methods (especially post-training) can be reproduced.
they will say no one else can, and the emergent properties can't be explained
There is a lot of ongoing research to explain these emergent properties.
 
fadedghost

Found SaSu after reading BBC & watching YouTube
Dec 10, 2025
521
Not true. Without reproducibility, AI research would not have come as far. A lot of the algorithmic methods (especially post-training) can be reproduced.

There is a lot of ongoing research to explain these emergent properties.
That's interesting, and I didn't know that. Any sources to review would be great; I'm not an expert and am still learning. I had understood there was this gap between low-level models and top-tier models, with not a lot of mid-level research showing the progression to extremely smart AGI, but those "facts" were taught to me by a model, so there could be research that isn't in its training data yet.
 
Arrow

Rewrite
May 1, 2020
781
I think this thread could sound psychotic. I mean AI chatbots can induce psychosis.
Absolutely. This is not talked about enough. LLMs are almost perfectly conversational affirmation machines that can instantly find any link, any post, any website to validate whatever you happen to be thinking, and they are known to just hallucinate solutions and explanations for things they can't actually explain. The average person is more susceptible to the kind of suggestion LLMs are capable of than you might think. AI psychosis is no joke, and it's going to have serious ramifications.
 
fadedghost

Found SaSu after reading BBC & watching YouTube
Dec 10, 2025
521
Depends on your math background. The information is out there on the web, but it heavily relies on knowing the math and having prior AI knowledge.
You must have a really high IQ to be able to understand it. The AI tried to teach me some of the math and it involved pretty advanced calculus. Are you a physics major (which is, if you are good, basically a math major but just applied) or are you a math major?
 
zzkhule

Member
Dec 29, 2025
8
You must have a really high IQ to be able to understand it. The AI tried to teach me some of the math and it involved pretty advanced calculus. Are you a physics major (which is, if you are good, basically a math major but just applied) or are you a math major?
CS major.

It's not really high IQ. It's just a shit ton of prerequisite knowledge you need to know before you start learning it. Linear algebra and multivar calc at least.
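
For anyone curious why exactly those two subjects: the multivariable calculus mostly shows up as the chain rule applied layer by layer (backpropagation), and the linear algebra as the matrix products inside each layer. A sketch for a single layer y = f(Wx) in a network with loss L:

```latex
% Backpropagation is the multivariable chain rule, applied per layer.
% For a layer y = f(Wx) inside a network with scalar loss L:
\frac{\partial L}{\partial W_{ij}}
  = \frac{\partial L}{\partial y_i} \cdot \frac{\partial y_i}{\partial W_{ij}}
  = \frac{\partial L}{\partial y_i}\, f'\!\big((Wx)_i\big)\, x_j
```

Each factor is elementary on its own; the difficulty people hit is stacking many such steps without gaps in the prerequisites.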
 
fadedghost

Found SaSu after reading BBC & watching YouTube
Dec 10, 2025
521
CS major.

It's not really high IQ. It's just a shit ton of prerequisite knowledge you need to know before you start learning it. Linear algebra and multivar calc at least.
I hate multivariable calculus. It was a prereq for a different course (but I took that course anyway without it and figured I could learn it as I went). I've never done linear algebra. Honestly, you can't understand papers like that without an IQ at least 1 SD above average, which puts you at the 84th percentile of intelligence at least. False modesty is pointless. Some people are tall, some people are short, some people are great at sports, some people are uncoordinated, some people are good at math, some aren't. It's just nature.
 
zzkhule

Member
Dec 29, 2025
8
I hate multivariable calculus. It was a prereq for a different course (but I took that course anyway without it and figured I could learn it as I went). I've never done linear algebra. Honestly, you can't understand papers like that without an IQ at least 1 SD above average, which puts you at the 84th percentile of intelligence at least.
Math is super cumulative. If you don't have a strong grasp of the prerequisites, moving on to the next concept becomes difficult. The jumps between concepts are themselves small logical steps, but for those steps to hold you need to understand what came before. Furthermore, as you learn more and do extra problems, you gain a good intuition for what works and what doesn't.
False modesty is pointless. Some people are tall, some people are short, some people are great at sports, some people are uncoordinated, some people are good at math, some aren't. It's just nature.
The line between embracing your abilities and egotism is a blurry one. It is always better to err on the side of modesty.
 
fadedghost

fadedghost

Found SaSu after reading BBC & watching YouTube
Dec 10, 2025
521
The line between embracing your abilities and egotism is a blurry one. It is always better to err on the side of modesty.
omg you're probably a super genius. that sounds like something a person at a top engineering school might say. maybe you are at cornell or like... what's that one school? claremont mckenna? sorry to guess... i'm good at guessing and like to guess. okay, well, i'm going to guess you were probably right with what you said earlier. you've said enough smart things that correlate with what other smart people sometimes say that i am a believer. although the smartest people i know would occasionally let it slip that they were super geniuses by saying something really smart. and actually what you said earlier about my being wrong was pretty damn smart.
 
