See also: https://www.lesswrong.com/posts/zSNLvRBhyphwuYdeC/ai-86-just-think-of-the-potential -- @Zvi
“The result is a mostly good essay called Machines of Loving Grace, outlining what can be done with ‘powerful AI’ if we had years of what was otherwise relative normality to exploit it in several key domains, and we avoided negative outcomes and solved the control and alignment problems...”
“This essay wants to assume the AIs are aligned to us and we remain in control without explaining why and how that occurred, and then fight over whether the result is democratic or authoritarian.”
“Thus the whole discussion here feels bizarre, something between burying the lede and a category error.”
“...the more concrete Dario’s discussions become, the more this seems to be an ‘AI as mere tool’ world, despite that AI being ‘powerful.’ Which I note because it is, at minimum, one hell of an assumption to have in place ‘because of reasons.’”
“Assuming you do survive powerful AI, you will survive because of one of three things.
1. You and your allies have and maintain control over resources.
2. You sell valuable services that people want humans to uniquely provide.
3. Collectively we give you an alternative path to acquire the necessary resources.
That’s it.”
Suggested spelling corrections:
I predict that the superforecasters in the report took
a lot of empirical evidence for climate stuff
and it may or may not be the case
There are also no easy rules that
meaning that we should see persistence from past events
I also feel these kinds of linear extrapolation
and really quite a lot of empirical evidence
are many, many times more infectious
engineered virus that spreads like the measles or covid
case studies on whether there are breakpoints in technological development
break that trend extrapolation wouldn’t have predicted
It’s very vulnerable to reference class and
impressed by superforecaster track records than you are.