Did you read the part of the workshop report that talked about this?
This decision, as it’s being made, doesn’t take many out of the pool of those who would make progress towards UFAI.
Getting decision theory “right enough” could be important for building a viable UFAI (or at least certain types of it, e.g., non-neuromorphic). There’s reason to think for example that AIXI would fail due to incorrect decision theory (but people trying to make AIXI practical do not seem to realize this yet). Given that we seem to constitute a large portion of all people trying to get decision theory right for AI purposes, the effect of our decisions might be larger than you think.
Alternatively, don’t talk about the results openly, but work anyway
Yes, but of course that reduces the positive effects of working on decision theory, so you might decide that you should do something else instead. For example, I think that thinking about strategy and meta-philosophy might be better uses of my time. (Also, keeping secrets is very hard, so even this alternative of working in secret may be a net negative.)
Did you read the part of the workshop report that talked about this?
Yes, and I agree, but it’s not what I referred to. The essential part of the claim (as I accept it) is that given WBE, there exist scenarios where FAI can be developed much more reliably than in any feasible pre-WBE scenario. At the very least, dominating WBE theoretically allows one to spend thousands of subjective years working on the problem, while in pre-WBE mode we have at most 150 and more likely about 50-80 years.
What I was talking about is the probability of success. FAI (and FAI theory in particular, as a technology-independent component) is a race against AGI and other disasters, which become more likely as technology develops. In any given time interval, all else equal, completion of an AGI seems significantly more likely than completion of FAI. It’s this probability of winning vs. losing the race in any given interval that I don’t expect to change with the WBE transition. Just as FAI research gets more time, AGI research is expected to get more time, unless FAI researchers somehow outrun everyone else for WBE resources (what I called “dominating WBE” above); but that’s an unlikely feat, and I see no reason to consider it more likely than simply solving FAI pre-WBE.
In other words, we have two different low-probability events that bound success pre-WBE and post-WBE: solving the likely-too-difficult problem of FAI in a short time (pre-WBE), and outrunning “competing” AGI projects (post-WBE). If AGI is easy, the pre-WBE effort is more important, because the probability of surviving to the post-WBE stage is then low. If AGI is hard, then FAI is hard too, and so we must rely on the post-WBE stage.
The gamble is on uncertainty about how hard FAI and AGI are. If they are very hard, we’ll probably get to the WBE race. Otherwise, it’s worth trying now, just in case it’s possible to solve FAI earlier, or perhaps to develop the theory well enough to gain a credible claim on dominating WBE and to finish the project before competing risks materialize.
Just as FAI research gets more time, AGI research is expected to get more time, unless FAI researchers somehow outrun everyone else for WBE resources (what I called “dominating WBE” above); but that’s an unlikely feat, and I see no reason to consider it more likely than simply solving FAI pre-WBE.
In order for FAI to win pre-WBE, FAI has to get more resources than AGI (e.g., more and smarter researchers, more computing power), but because FAI is much harder than AGI, it needs a large advantage. The “race for WBE” is better because it’s a fairer one, and you may only need to win by a small margin.
Also, if someone (who isn’t necessarily an FAI group to start with) dominates WBE, they have no strong reason to immediately aim for AGI. What does it buy them that they don’t already have? They can take the (subjective) time to think over the situation, and perhaps decide that FAI would be the best way to move forward.
In order for FAI to win pre-WBE, FAI has to get more resources than AGI (e.g., more and smarter researchers, more computing power), but because FAI is much harder than AGI, it needs a large advantage. The “race for WBE” is better because it’s a fairer one, and you may only need to win by a small margin.
If FAI is much harder, the WBE race has more potential for winning than the pre-WBE race, but still a low probability (getting more resources than all AI efforts combined is unlikely, and by the time the WBE race even begins, a lot is already lost).
Also, if someone (who isn’t necessarily an FAI group to start with) dominates WBE, they have no strong reason to immediately aim for AGI. What does it buy them that they don’t already have? They can take the (subjective) time to think over the situation, and perhaps decide that FAI would be the best way to move forward.
No strong reason but natural stupidity. This argues for developing enough theory pre-WBE to make deliberate delay in developing AGI respectable/likely to get traction.