I mean… yeah?

Some things I think would cause people to disagree:

- They think a friendly-AGI-run society would have some use for money, conflict, etc. I’d say the onus is on them to explain why we would need those things in such a society.
- They think a good “singularity” would not be particularly “weird” or sci-fi looking, which ignores the evidence of technological development throughout history. I think this is what the “The specific, real reality in front of you” sentence is about. A medieval peasant would very much disagree with that sentence if they were suddenly thrust into a modern grocery store; I think they would say the physical reality around them had changed to a pretty magical-seeming degree.
- Any/all of the above, but applied to a harmful singularity. (E.g. thinking that an unfriendly AGI could not kill everyone, rendering their previous problems irrelevant.)

This seems to be a combo of the absurdity heuristic and trying to “psychoanalyze your way to the truth”. Just because something sounds kind of like some elements of some religions does not make it automatically false.

(I’d be less antsy about this if it were a layperson’s comment in some reddit thread, but this is a LessWrong comment on an AI alignment researcher’s post. I did not expect to see this sort of thing in this place at this time.)
> A medieval peasant would very much disagree with that sentence, if they were suddenly thrust into a modern grocery store. I think they would say the physical reality around them changed to a pretty magical-seeming degree.
They would still understand the concept of paying money for food. The grocery store is pretty amazing but it’s fundamentally the same transaction as the village market. I think the burden of proof is on people claiming that money will be ‘done away with’ because ‘post-scarcity’, when there will always be economic scarcity. It might take an hour of explanation and emotional adjustment for a time-displaced peasant to understand the gist of the store, but it’s part of a clear incremental evolution of stores over time.
I think a basically friendly society is one that exists at all and is reasonably okay, at least somewhat clearly better than the current one. I don’t see why economic transactions, conflicts of all sorts, etc. wouldn’t still happen, assuming the absence of existentially destructive ones that would preclude such a hypothetical society existing in the first place. I can see the nature of money changing, but not the fundamental fact of there being trades.

I don’t think an AI can just decide to do away with conflict via unilateral fiat, absent an enormous amount of multipolar effort, in what I would consider a friendly society not run by a world dictator. Like, I predict it would quite likely be terrible to have an ASI with such disproportionate power that it is able to do that, given it could and would be co-opted by power-seekers.

I also think that trying to change things too fast, or to ‘do away with problems’, is itself something trending along the spectrum of unfriendliness from the perspective of a lot of humans. I don’t think the Poof Into Utopia After FOOM model makes sense, where you have one shot to send a singleton rocket into gravity with the right values or forever hold your peace. Anything with such totalizing power, making things go Poof without clear democratic deliberation and consent, would itself be an unfriendly agent. This seems like one of the planks of SIAI ideology that now looks clearly wrong to me, though not indubitably so. There seems to be a desire to make everything right and to obtain unlimited power to do so, and this seems intolerant of a diversity of values.
> This seems to be a combo of the absurdity heuristic and trying to “psychoanalyze your way to the truth”. Just because something sounds kind of like some elements of some religions, does not make it automatically false.
I am perfectly happy to point out the ways people around here sometimes quite obviously use Singularitarianism as a (semi-)religion, as part of the functional purpose of the memetic package. Disallowing such social observations would be epistemically distortive. I am not saying it isn’t also other things, nor am I saying it’s bad to have a religion, except insofar as problems tend to arise. On these questions, I think I am coming at this thread with more of a Hansonian/outside-view perspective than the AI zookeeper/nanny/fully automated luxury gay space communism one.
> They think a friendly-AGI-run society would have some use for money, conflict, etc. I’d say the onus is on them to explain why we would need those things in such a society.
Because the laws of thermodynamics still hold, basically. I do buy that a lot of stuff could switch over to non-money modes, but if we assume that the basic laws of physics fundamentally still hold true, then scarcity never fully goes away, and claims that money will be done away with are the sort that need hard evidence.

Much more generally, the Industrial Revolution is a good example: it really did improve the lives of humans massively, even with imperfect distribution of benefits, but it didn’t end conflict or money, and I’d argue there was still a use for money (although the Industrial Revolution did drastically reduce the benefits of war to non-ideological actors).
> They think a good “singularity” would not be particularly “weird” or sci-fi looking, which ignores the evidence of technological development throughout history. I think this is what the “The specific, real reality in front of you” sentence is about.
Interestingly enough, while I think this is true over the long term, and potentially even over the short term, a major problem is that LWers tend to underestimate how long things take to change, and in general have a bit of a bad habit of assuming everything changes at maximum speed.
> A medieval peasant would very much disagree with that sentence, if they were suddenly thrust into a modern grocery store. I think they would say the physical reality around them changed to a pretty magical-seeming degree.
I agree that the medieval peasant would be very surprised at how much things had changed, but they’d also detect a lot of continuity and would find a lot of commonalities, especially on the human side of things.
> This seems to be a combo of the absurdity heuristic and trying to “psychoanalyze your way to the truth”. Just because something sounds kind of like some elements of some religions, does not make it automatically false.
But it does decrease the credence, potentially substantially, and that could be important.
Now, my general view is that there is reason to believe AI could be the greatest technology in history, but I agree with the OP that there is often a bit of magic involved in how it gets discussed, and it’s a bit of a red flag how much AI gets compared to gods.

And contra some people, I do think ‘psychoanalyzing your way to the truth’ is more useful than people give it credit for, especially if you have good a priori reason to expect biases to drive the discussion, because it can help you detect red flags.