It’s easy to imagine new data emerging over the next decades that makes it clear that pursuit of whole-brain emulation (or some currently unimagined strategy) is a far more effective approach to existential risk reduction than Friendly AI research.
Not with whole-brain emulation, for me. That would represent an incredible miracle, IMO.
At present it looks to me like a positive singularity is substantially more likely to occur starting with whole-brain emulation than with Friendly AI.
The other way around, surely! Machine intelligence is much more likely to come first, and to be significant. Few carried on with scanning and emulating birds once we had working planes; essentially, that whole project went nowhere.
For more, see: http://alife.co.uk/essays/against_whole_brain_emulation/
In general, when one person asserts that something seems very likely to them, it isn’t helpful to simply assert that the opposite seems extremely likely without giving at least some minimal reasoning for why you think so.
Your comment is a puzzling one. You are apparently advising me to offer more assistance. Right, but isn’t that up to me? You can’t possibly analyse, from a distance, the budgets I have to allocate for such things.
Let me put it this way, then: most LW readers don’t like reading unproductive conversations, and it is hard to get more unproductive than one person saying “I believe X!” and another saying “Yeah, well I believe ~X, so there!” You are welcome to do that, but don’t be surprised if the rest of us vote down such comments as things we don’t want to see.