Downvoted, because even though I think this is a reasonable point worth considering, I’m not excited about a LessWrong dominated by snarky memes that make points instead of essays.
I started an essay version, but decided the meme version was concise without much loss of detail. I see your point though. I’ll go ahead and remove my upvote of this and post the essay instead.
I do appreciate the conciseness a lot.
It seems like I maybe would have gotten the same value from the essay (which would have taken 5 minutes to read?) as from this image (which maybe took 5 seconds).
But I don’t want to create a culture that rewards snark even more than it already does. It seems like that is the death of discourse in a bunch of communities.
So I’m interested in whether there are ways to get the benefits here without the costs.
What about the essay first, with the image at the bottom?
Agreed.
Can you give an example of somebody making that move?
I got the impression of this happening on the side of MIRI in the 2021 conversations.
Soares 14:43:
Nate’s attempted rephrasing: EY’s model might not be confident that there’s not big GDP boosts, but it does seem pretty confident that there isn’t some “half-capable” window between the shallow-pattern-memorizer stuff and the scary-laserlike-consequentialist stuff, and in particular Eliezer seems confident humanity won’t slowly traverse that capability regime.

Yudkowsky 11:16:
In particular, I would hope that—in unlikely cases where we survive at all—we were able to survive by operating a superintelligence only in the lethally dangerous, but still less dangerous, regime of “engineering nanosystems”. Whereas “solve alignment for us” seems to require operating in the even more dangerous regimes of “write AI code for us” and “model human psychology in tremendous detail”.

Yudkowsky 11:41:
It won’t be slow and messy once we’re out of the atmosphere, my models do say. But my models at least permit—though they do not desperately, loudly insist—that we could end up with weird half-able AGIs affecting the Earth for an extended period.

Yudkowsky 11:09:
There are people and organizations who will figure out how to sell AI anime waifus without that being successfully regulated, but it’s not obvious to me that AI anime waifus feed back into core production cycles.
When it comes to core production cycles, the current world has more issues that look like “No matter what technology you have, it doesn’t let you build a house” and more places for the larger production cycle to potentially be bottlenecked or interrupted.
I suspect that the main economic response to this is that entrepreneurs chase the 140 characters instead of the flying cars—people will gravitate to places where they can sell non-core AI goods for lots of money, rather than tackling the challenge of finding an excess demand in core production cycles which it is legal to meet via AI.
Even if some tackle core production cycles, it’s going to take them a lot longer to get people to buy their newfangled gadgets than it’s going to take to sell AI anime waifus; the world may very well end while they’re trying to land their first big contract for letting an AI lay bricks.

Yudkowsky 17:01:
Physics is continuous, but it doesn’t always yield things that “look smooth to a human brain”. Some kinds of processes converge to continuity in strong ways, where you can throw discontinuous things into them and they still end up continuous, which is among the reasons why I expect world GDP to stay on trend up until the world ends abruptly: world GDP is one of those things that wants to stay on a track, and an AGI building a nanosystem can go off that track without being pushed back onto it.
Perhaps those could be operationalised as an unconditional and a conditional statement: unconditionally, we expect very fast takeoff + takeover with advanced technology; conditional on that not happening, we will still be surprised by AI, because regulation will keep TAI from happening before these systems completely take over.
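In symbols (my shorthand, not anything the original authors wrote): let $F$ stand for “very fast takeoff + takeover with advanced technology” and $S$ for “AI progress surprises us, e.g. no broad GDP boost before the end”. Then the unconditional claim is roughly that $P(F)$ is high, and the conditional claim is that $P(S \mid \neg F)$ is also high, since in that branch regulation blocks TAI from showing up in the economy before takeover.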