Even in that example, all “hostile superintelligences” are prevented from existing (with acceptable collateral damage). Even though alignment may take years or decades longer to solve in this scenario, it’s a much safer environment in which to do so.
That said, these hypotheticals are unlikely (their purpose is pedagogical). It’s likely due to my own ignorance, but I am unaware of any pivotal acts attached to anyone’s research agenda.
> Even though alignment may take years or decades longer to solve in this scenario, it’s a much safer environment in which to do so.
It seems safer, but I’m not sure about “much safer”. You now have an extremely powerful AI that takes human commands, lots of people and governments would want to get their hands on it, and geopolitics is highly destabilized due to your unilateral actions. What are your next steps to ensure continued safety?
> That said, these hypotheticals are unlikely (their purpose is pedagogical). It’s likely due to my own ignorance, but I am unaware of any pivotal acts attached to anyone’s research agenda.
I think the examples in that Arbital post are actually intended to be realistic examples (i.e., something that MIRI or at least Eliezer would consider doing if they managed to build a safe and powerful task AGI). If you have reason to think otherwise, please explain.
> It seems safer, but I’m not sure about “much safer”. You now have an extremely powerful AI that takes human commands, lots of people and governments would want to get their hands on it, and geopolitics is highly destabilized due to your unilateral actions. What are your next steps to ensure continued safety?
Anything that “decisively settles a win or loss, or drastically changes the probability of win or loss, or changes the future conditions under which a win or loss is determined” qualifies as a pivotal event. If you’re arguing that this specific example doesn’t change the probability of winning enough (and you do bring up good points!), then it might not qualify as one.
> I think the examples in that Arbital post are actually intended to be realistic examples (i.e., something that MIRI or at least Eliezer would consider doing if they managed to build a safe and powerful task AGI). If you have reason to think otherwise, please explain.
My initial objection: consider the upload pivotal event. How likely is it that the first pivotal event is uploading alignment researchers? Multiply that by the probability that alignment researchers have access to the first task AGI capable of uploading. (I’m equating “realistic” with “likely.”)
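To spell that estimate out (this is just a rough rendering of the multiplication above, which treats the two factors as roughly independent; the symbols are placeholders, not probabilities I’m claiming to know):

\[
P(\text{upload scenario}) \;\approx\; p_1 \cdot p_2,
\]

where \(p_1\) is the probability that the first pivotal event is uploading alignment researchers, and \(p_2\) is the probability that alignment researchers have access to the first task AGI capable of uploading.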
Though by this logic, the most realistic/likely pivotal event is the one that requires the least absolute and relative advantage, and all other pivotal events are “unrealistic”. For example, uploading researchers and shutting down hostile AGI each require a certain level of capability and relative advantage (the uploading example assumes you’re the first to gain uploading capabilities), but those two probably aren’t the pivotal events that require the smallest capability advantage.
So my definition of “realistic pivotal event” might not be useful, since the only events that could qualify are the top 100 pivotal events (ranked by least capability advantage required), and coming up with one of those pivotal events may very well require an AGI.