A medieval peasant would very much disagree with that sentence, if they were suddenly thrust into a modern grocery store. I think they would say the physical reality around them changed to a pretty magical-seeming degree.
They would still understand the concept of paying money for food. The grocery store is pretty amazing but it’s fundamentally the same transaction as the village market. I think the burden of proof is on people claiming that money will be ‘done away with’ because ‘post-scarcity’, when there will always be economic scarcity. It might take an hour of explanation and emotional adjustment for a time-displaced peasant to understand the gist of the store, but it’s part of a clear incremental evolution of stores over time.
They think a friendly-AGI-run society would have some use for money, conflict, etc. I’d say the onus is on them to explain why we would need those things in such a society.
I think a basically friendly society is one that exists at all and is reasonably okay, at least somewhat clearly better than the current one. I don’t see why economic transactions, conflicts of all sorts, etc. wouldn’t still happen, assuming the absence of the existentially destructive kinds that would preclude such a hypothetical society from existing in the first place. I can see the nature of money changing, but not the fundamental fact that trades happen.
I don’t think an AI can just decide to do away with conflicts by unilateral fiat, absent an enormous amount of multipolar effort, in what I would consider a friendly society not run by a world dictator. In fact, I predict it would quite likely be terrible to have an ASI with power so disproportionate that it could do that, given that it could, and probably would, be co-opted by power-seekers.
I also think that trying to change things too fast, or to ‘do away with problems’, is itself something that trends along the spectrum of unfriendliness from the perspective of a lot of humans. I don’t think the Poof Into Utopia After FOOM model makes sense: the idea that you have one shot to launch a singleton rocket with the right values or forever hold your peace. An agent with such totalizing power, making things go Poof without clear democratic deliberation and consent, would itself be unfriendly. This seems like one of the planks of SIAI ideology that now looks clearly wrong to me, though not indubitably so. There seems to be a desire to make everything right, and to obtain unlimited power to do so, and that seems intolerant of a diversity of values.
This seems to be a combo of the absurdity heuristic and trying to “psychoanalyze your way to the truth”. Just because something sounds kind of like some elements of some religions, does not make it automatically false.
I am perfectly happy to point out the ways people around here sometimes obviously use Singularitarianism as a (semi-)religion, as part of the functional purpose of the memetic package. Not allowing such social observations would be epistemically distortive. I am not saying it isn’t also other things, nor am I saying it’s bad to have religion, except that problems tend to arise. On these questions, in this thread, I think I am coming from more of a Hansonian/outside-view perspective than the AI zookeeper/nanny/fully automated luxury gay space communism one.
How do we do this without falling into the Crab Bucket problem, AKA the Heckler’s Veto, which is definitely a thing that exists and is exacerbated by these concerns in EA-land? “Don’t do risky things” too easily slides into “don’t do things”.