I enjoyed reading this post; thank you for writing it. LessWrong has an allergy to basically every category Marx is a member of—“armchair” philosophers, socialist theorists, pop humanities idols—in my view, all entirely unjustified.
I had no idea Marx’s forecast of utopia was explicitly based on extrapolating the gains from automation; I’ll take your word for it, though from my passing familiarity with his work, I have a hunch you may be overselling his naivete.
Unfortunately, since the main barrier to humans solving the technical alignment problem at present is not a lack of altruistic intentions but a shortfall of raw cognitive intelligence, any meta-alignment scheme that proposes to succeed today has to do far more than ensure AGI-builders are accounting for risk to the best of their ability. It has to make the best of their ability good enough. That involves, at the very minimum, an intensive selection program for geniuses who are then placed in a carefully incentive-aligned research environment, and probably human intelligence enhancement.
To be fair here, Marx was wildly overoptimistic about what central economic planning could achieve in the 20th century: he overestimated how far machines/robots could go, and he also claimed communist countries wouldn’t need a plan because natural laws would favor communism, which was bullshit.
[...] Marx was philosophically opposed, as a matter of principle, to any planning about the structure of communist governments or economies. He would come out and say it was irresponsible to talk about how communist governments and economies will work. He believed it was a scientific law, analogous to the laws of physics, that once capitalism was removed, a perfect communist government would form of its own accord. There might be some very light planning, a couple of discussions, but these would just be epiphenomena of the governing historical laws working themselves out.
More here: