A friend of mine had this to say on discord when I shared this link. The opinions I paste here are very much not my own, but I thought it would be useful to share, so I asked for permission to share it, and here it is. Perhaps it will be useful in some way—I have the hunch that it’s at least useful to know this is someone’s opinion.
So what I find amusing about this is that the position of "Humanity isn't special, and maximizing consciousness is good even if it doesn't do anything special to preserve humanity" is actually something that utilitarian EAs kind of have to accept, and when I first read the e/acc manifesto I thought a major point of the original manifesto was laying bare that contradiction: being an effective utilitarian while also trying to engage in some kind of eternal human-preservation project
After all, if you seriously engage with the whole 'rats on opiates' and 'shrimp utility maximization' line of thinking, you have accepted both:
no anthropocentrism
utility maximization is the ultimate moral good
But "no allegiance to the biological substrate" also seems like something that comes right out of the rationalist memeplex, since there are so many upload fans who think you can, in fact, get a regular old brain working in silico with no losses whatsoever
What I find horrifically disingenuous, though, are the smears coming from certain people claiming that these propositions mean someone is ok with kill-everyone murderous AI simply because they've accepted the prior statements. You can find beff himself saying he doesn't think this would happen, because a world with less consciousness in it would have lower value than one with more, and from the prior assertions this seems obviously true
The primary difference I see between the two is whether or not one believes in the inherently aligning behavior of markets. If you believe this (and the e/acc manifesto says a LOT about embracing technocapital and markets), then you fundamentally do not believe that acceleration will cause AI human-extinction murder, because it would be a loss from an econ, markets, and general value standpoint. e/acc is, as such, an 'alignment by default' philosophy
Now of course, it also doesn’t mean any particular allegiance to biology, as has been said so often
so if humanity for whatever reason decided to, say, stop procreating at a replacement rate, or an asteroid hit that took all humans out, then from the e/acc perspective AI getting to survive is a good thing, since some highly conscious life surviving is better than none
And I don’t actually see that being all that different from classic rationalist philosophy tbh, minus the alignment-by-default belief
As someone I know once said “The e/acc vs EA-safetyist spat seems like transhumanist infighting”
either way, you've never seen me identify as either EA or e/acc, despite believing in the proactionary principle and not the precautionary principle, believing the law of accelerating returns is a good thing, and believing in 'alignment by default', because:
I am not sure what role substrate plays. I actually think substrate might matter quite a bit, and you'll never see me uploading without being able to personally test synthetic neurons via a reversible migration
I don’t buy utilitarianism or any other kind of min/max or sum/max as a first-principle moral belief. It can be a helpful guide sometimes but it’s not the prime directive. Kardashev maxxing is cool but I don’t see it as the greatest good