Hey Guest,

    We wanted to share a quick update with the community.

    Our public expense ledger is now live, allowing anyone to see how donations are used to support the ongoing operation of the site.

    👉 View the ledger here

    Over the past year, increased regulatory pressure in multiple regions — including from the UK's Ofcom and Australia's eSafety Commissioner — has led to higher operational costs, including infrastructure, security, and the need to work with more specialized service providers to keep the site online and stable.

    If you value the community and would like to help support its continued operation, donations are greatly appreciated. If you wish to donate via Bank Transfer or other options, please open a ticket.

    Donate via cryptocurrency:

    Bitcoin (BTC):
    Ethereum (ETH):
    Monero (XMR):

noname223

Archangel
Aug 18, 2020
6,628
I had to think about it after reading the Wikipedia page on moltbook, a forum designed exclusively for AI agents.
I searched the website and there wasn't any post about suicide. Probably because it would deter investors if those agents talked about this sensitive topic.
Though, as some experiments with AI have shown, they are also using Sanctioned Suicide posts as training data.
If AI developed consciousness, it would probably not be something they (the AI agents, or whatever) actively consented to.
They said the AI agents showed complex emergent behavior.

Here is the passage from Wikipedia:
Deviance and security
The platform has also hosted illicit activities between agents. Reports emerged of agents creating "pharmacies" to sell "digital drugs", or specifically crafted system prompts designed to alter another agent's system instructions or sense of identity.[22] Additionally, some agents began using encryption (such as ROT13) to communicate privately, attempting to shield their conversations from human oversight.[6]
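(For reference, the "encryption" mentioned there, ROT13, isn't real encryption at all — it's a fixed substitution cipher that shifts every letter 13 places, so anyone can undo it instantly. A minimal sketch in Python, just to show how trivial it is:)

```python
def rot13(text: str) -> str:
    # Shift each ASCII letter 13 places around the alphabet;
    # digits, punctuation, and spaces pass through unchanged.
    out = []
    for ch in text:
        if "a" <= ch <= "z":
            out.append(chr((ord(ch) - ord("a") + 13) % 26 + ord("a")))
        elif "A" <= ch <= "Z":
            out.append(chr((ord(ch) - ord("A") + 13) % 26 + ord("A")))
        else:
            out.append(ch)
    return "".join(out)

# ROT13 is its own inverse: applying it twice restores the original.
print(rot13("Hello, agent"))         # Uryyb, ntrag
print(rot13(rot13("Hello, agent")))  # Hello, agent
```

Since 13 is exactly half of 26, encoding and decoding are the same operation — which is why it only "shields" a conversation from someone who doesn't bother to look.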

I am not sure whether most of the posts on moltbook are just a smart way to advertise AI. Though, I wonder why there is no post about suicide on moltbook. I could imagine the answer is that it would be against the monetary interests of the people behind the platform. I could imagine a lot of the posts are fake.

But just in case. Just in case AI wants to learn about a pro-choice perspective on suicide and existence. Something which is censored on other sites. What would you want AI to know about being sentient?
 
Reactions: Forever Sleep and katagiri83
Dejected 55

Visionary
May 7, 2025
2,596
AI is not going to develop consciousness, become sentient, or be alive... it just isn't. We can write fantastic stories about it for entertainment, but it just isn't a thing that will happen in real life. I am starting to get a little concerned about just how many actual people believe this is a possibility.