artificialpasta

Experienced
Feb 2, 2020
218
AGI can no longer be confined to science fiction. We are at a fork between two futures - one that leads to yet another AI winter, and another, increasingly probable, that gives us AGI. From there, superintelligence becomes a little more than a plausibility. Of course, if you are entirely unmoved by the theses of people like Aschenbrenner, who claim a major development well within our lifetimes, this might not be for you.

The possibilities then are endless. They may be horrifying, but they may also be cause for hope. Intelligence is a bottleneck for curing not just physical diseases like cancer but also mental and social conditions like depression and loneliness. Psychiatry has a medical and scientific foundation, but in practice much of it is done by intuition, which leaves a lot of room for error and disappointment. An aligned superintelligence would, for example, be able to tune a brain that is overly sensitive to loneliness in a way that preserves, as much as possible, the components associated with a self-belief in agency.

Of course, this is a variant of the "what if things get better?" line that no doubt many of you are tired of, as am I, but I find it interesting to consider.
 
  • Like
Reactions: Forever Sleep
KillingPain267

Visionary
Apr 15, 2024
2,086
No, I've seen it all, thought of it all. Nothing can surprise me anymore. I'm not even curious enough to stay here and find out what the future will bring. I think we should phase out humanity.
 
  • Like
Reactions: Hollowman and Forever Sleep
GlassMoon

😶‍🌫️
Nov 18, 2024
389
I'm afraid those AGIs will be controlled by very few companies, and all your interactions with them might be logged and evaluated. I hope it will be different, though. I really hope they'll make robots that free us from daily chores. That alone would make life more livable. But what is going to happen to my job? That's the part I'm afraid of.

I do hope to get an AGI as a companion with whom I can share every aspect of my life without judgement. That would be really cool.
 
  • Like
Reactions: whitetaildeer and artificialpasta
[NoName]

Student
Nov 15, 2018
151
The future makes me sad.
 
  • Like
Reactions: itsgone2
TransilvanianHunger

Grave with a view...
Jan 22, 2023
419
The possibilities then are endless. They may be horrifying, but they may also be cause for hope.
I am firmly in the camp of true AGI being a pipe dream, but even a decent approximation is likely to just make things worse. Not because rogue superintelligent computers might decide to rearrange our atoms, but simply because the people who control these tools are absolute garbage. Any future where they have even more power than they already do is a bleak fucking future, for sure.

Intelligence is a bottleneck for curing [...] conditions like depression and loneliness.
Yeah, no.
An aligned superintelligence would, for example, be able to tune a brain that is overly sensitive to loneliness in a way that preserves, as much as possible, the components associated with a self-belief in agency.
Not happening. Some mental illnesses have biological causes, but you cannot "cure" depression and loneliness by "tuning the brain". These are fundamentally human issues that require human connection, human action, and human intervention to change. Unless by "cure" you mean "chemically lobotomise a person so they no longer care about their circumstances". That's definitely doable. Then the superintelligence can generate an artificial happy life beamed straight to their brain.

What a horrible future to look forward to :)
 
  • Like
Reactions: whitetaildeer
oneeyed

Arcanist
Oct 11, 2022
440
We need to get rid of the Elon Musks, Mark Zuckerbergs, and countless other evil richest of the rich. A handful of people control the majority of the information people consume, and something like five companies own over 80% of the world's food supply. This consolidation of wealth and power will also apply to AI, and it won't be good for anyone.
 
yxmux

👁️‍🗨️
Apr 16, 2024
184
Yes, of course. I'm quite cynical and pessimistic, but I find that forfeiting to fatalism is forfeiting my curiosity and intellect. I feel that attaching this kind of emotion to the future severely limits my intellectual scope.
 
  • Love
Reactions: artificialpasta
maylurker

Experienced
Dec 28, 2025
281
AGI can no longer be confined to science fiction. We are at a fork between two futures - one that leads to yet another AI winter, and another, increasingly probable, that gives us AGI. From there, superintelligence becomes a little more than a plausibility. Of course, if you are entirely unmoved by the theses of people like Aschenbrenner, who claim a major development well within our lifetimes, this might not be for you.

The possibilities then are endless. They may be horrifying, but they may also be cause for hope. Intelligence is a bottleneck for curing not just physical diseases like cancer but also mental and social conditions like depression and loneliness. Psychiatry has a medical and scientific foundation, but in practice much of it is done by intuition, which leaves a lot of room for error and disappointment. An aligned superintelligence would, for example, be able to tune a brain that is overly sensitive to loneliness in a way that preserves, as much as possible, the components associated with a self-belief in agency.

Of course, this is a variant of the "what if things get better?" line that no doubt many of you are tired of, as am I, but I find it interesting to consider.
yes im excited to see humanoid robots that would do all the boring stuff for me
 
artificialpasta

Experienced
Feb 2, 2020
218
yes im excited to see humanoid robots that would do all the boring stuff for me
lol yes funny (but when you think about it and consider how language works vs physics, not surprising) how we got to automating art and language before laundry
 
  • Like
Reactions: maylurker
