The cover art looks like something e/acc would use for their own cover; rapid progress is their vision. But flying vehicles above an urban area are actually pretty risky! Some will crash, and people will be killed.
I definitely want a dope future with flying cars and terraformed planets and everything. Pretty sure this is true of most people on LW. Technological progress is great; AI is just a large outlier in its ratio of risks to benefits.
the only disagreements are about how to get there
...just because some random tribal line is sometimes being drawn in some social scene doesn’t mean it carves reality at its joints. Nothing you said is an argument that progress is bad, and by default I am a big fan of visions of technological progress. See my longer comment here for more details on my position on random tribal lines.
I thought the “tribal” line was:
Rationality, per EY: pause AI at the GPT-4 level for 30 years, enforced even at the risk of nuclear war with defectors. After 30 years, very slow and careful adoption, with slow forward progress on anything that needs AI. The result is a world very similar to today’s.
e/acc: AI is now proven feasible, so race to exploit it as fast as possible; if that results in human extinction, that’s what the laws of physics wanted. Many negative outcomes are simply the cost of progress*, as in the 1960s in the USA. Geohot expressly makes this point when plotting the slowdown of progress that began in the 1970s.
*E.g. asbestos, still one of the most fireproof substances ever found.
If you thought reality makes { flying cars, futuristic cities, fusion VTOL shuttles, terraforming, life extension long enough to see all this in person } require AI, on the grounds that humans have made little progress in these domains without it in 50+ years and that these things are all incredibly expensive, doesn’t that “carve reality at the joints”?
Without the intelligent robotic labor that requires AI, it won’t happen, just as humans would not have reached this point without engines.
Am I using “carving reality” wrong in this context, setting aside the details of why I think future technology all has AI as a prerequisite?
I think there was a miscommunication: I don’t mean to say that you are inaccurately describing tribal lines from some other social scene; I mean that I have little faith those tribal lines will be a remotely accurate guide to good ideas. Take national politics in the US, which has two parties. It is wrong to think that one party has every question exactly right and the other has every question exactly wrong; there are too many other social and political forces shaping where tribal lines get drawn for that to be a feasible outcome. So when asking yourself a specific question, it’s better to look at how reality bears on that question (e.g. “What is the optimal tax rate? Which societies have done better and worse with different rates? What makes sense ethically from first principles?”) rather than asking what different tribes say about it.
My high-conviction hot take goes further: I think all positive future timelines have AI as a prerequisite. I expect that, sans AI, our future (our immediate future: decades, not centuries) is going to be the ugliest, and last, chapter in our civilization’s history.