I think consciousness and self-awareness are major problems. The forced pro-mortalist perspective of many efilists I've encountered dissuades me from efilism, though. Ideologically, efilism supports benign species-wide exit strategies, but humans just aren't, judging by majority action, benign animals. I don't have faith in us to carry out any wide-scale intervention benignly. What is your particular efilist perspective?
I see even non-sentient life as unacceptable, given the risk of it developing sentience: the very creator of life's problems, essentially a torture mechanism that fills life with endless problems to solve, endless needs and desires to be fulfilled, and pain and reward mechanisms that urge it to risk its wellbeing in pursuit of them. Life simply serves no function in the universe; it fixes nothing except the problems it causes itself.
I support planetary pro-mortalism in theory. The chances of it wiping out all life, doing so relatively painlessly, and preventing abiogenesis afterward are the gigantic issues (as is whether we should gamble on trying to travel the stars to wipe out other life, if intergalactic travel is even possible). I see the suffering-loaded impositions that life keeps forcing as too egregious to be allowed to continue, and perpetuating them to strive toward a "utopic" state as not worth the cost: there can be no compensation for the hundreds of billions of feeling organisms that would be forced into existence to suffer along the way.

An ideal red-button scenario doesn't seem plausible (and even if one existed, it's doubtful civilization would ever consider pressing it), so I guess I'm willing to settle for David Pearce's brand of negative utilitarian transhumanism. But I'm wary of the potential horrors that greater technological power could unleash: if we can manipulate the dial of wellbeing finely enough to drastically reduce suffering, the ability to amplify suffering would be there as well. (For example: humanity being digitally uploaded into a virtual reality, with the potential to simulate unparalleled sensations, both good and bad. Security would be of the utmost importance; could it be guaranteed for however long humanity survives? Just think of how buggy our mobile devices are, or of the potential for a rogue force to emerge that wants to cause havoc, potentially out of sheer boredom.)
But then there's the fundamental question of existence (why are we heeeere?) that I'd really want answered before pulling the plug on life: should we gamble on continuing civilization to find out, and could we even do anything if we found the answers (say, if the universe is cyclical and we're doomed to suffer for eternity)?
Meh. I'm a fence-sitter of cosmic proportions. I'm just hoping a benevolent AI god emerges from the tech singularity and solves everything.
Inb4 ban