If I expected the current geopolitical situation to continue for a long time, I would agree. But neither of us does; we both place a high probability on either FAI or uFAI within 100 years; the top priority is just to survive that long.
Also, even if you assign some probability to there being no singularity any time soon, the expected rewards in the scenario where there is a singularity soon are higher, since you get to live a lot longer, so you should weight that possibility more heavily.
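To make that concrete, here's a minimal sketch of the expected-value argument; all the probabilities and lifespan figures are made-up placeholders, not anyone's actual estimates.

```python
# Toy expected-value comparison: why a near-term singularity scenario can
# dominate your planning even if you think it's less likely than "no change".
# All numbers below are illustrative placeholders, not real forecasts.

p_singularity_soon = 0.3          # assumed probability of a singularity within your lifetime
p_status_quo = 1 - p_singularity_soon

years_if_singularity = 10_000     # assumed lifespan if a positive singularity happens
years_if_status_quo = 50          # assumed remaining lifespan under the status quo

expected_years = (p_singularity_soon * years_if_singularity
                  + p_status_quo * years_if_status_quo)

# Share of the expected value that comes from the singularity branch:
share = p_singularity_soon * years_if_singularity / expected_years
print(f"Expected remaining years: {expected_years:.0f}")
print(f"Fraction coming from the singularity branch: {share:.0%}")
```

Even with a modest probability on the near-term branch, most of the expected payoff sits there, which is the sense in which you should care more about that possibility.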
(Yesterday I heard someone who ought to know say human-level AI, and not provably friendly, in 16 years. Yes, my jaw hit the floor too.)
I hadn’t thought of the “park it, we have bigger problems”, or “park it, Omega will fix it” approach, but it might make sense. That raises the question, and I hope it’s not treading too far into off-LW-topic territory: to what extent ought a reasoning person act as if they expected gradual and incremental change in the status quo, and to what extent ought their planning to be dominated by expectation of large disruptions in the near future?
Well, if you actually believe the kinds of predictions that say the singularity is coming within your lifetime, you should expect the status quo to change. If you don’t, then I’d be interested to hear your argument as to why not.
The question I was struggling to articulate was more like: should I give credence to my own beliefs? How much? And how do I deal with an instinct that doesn’t want to put AI and postmen in the same category of “real”?
If you don’t give credence to them … then they’re not your beliefs! If you go to Transhumanist events and profess to believe that a singularity is likely in 20 years, but then feel hesitant when someone extracts concrete actions you should take in your own life that would be advantageous if and only if the singularity hypothesis is true, then you don’t really believe it.
Who on Earth do you think ought to know that?
Shane Legg, who was at the London LW meetup.
Shane expressed this opinion to me too. I think that he needs to be more probabilistic with his predictions, i.e. give a probability distribution. He didn’t adequately answer all of my objections about why neuro-inspired AI will arrive so soon.
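For illustration, “give a probability distribution” could be as simple as stating a handful of cumulative probabilities over arrival dates rather than a single year. The numbers below are invented for the example, not Shane’s actual forecast.

```python
# Toy example of expressing an AI-arrival forecast as a distribution instead
# of a point estimate. The probabilities are invented for illustration only.

forecast = {            # P(human-level AI arrives by this year)
    2020: 0.05,
    2025: 0.25,
    2030: 0.50,         # median arrival date in this made-up forecast
    2040: 0.80,
    2060: 0.95,
}

# Find the median: the first year by which cumulative probability reaches 50%.
median_year = next(year for year, p in sorted(forecast.items()) if p >= 0.5)
print(f"Median arrival year (illustrative): {median_year}")
```

A single headline year like “16 years” is then just one summary statistic of a forecast like this, and it hides how much probability mass sits on earlier and later dates.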
From what he explained, the job of reverse engineering a biological mind is looking much easier than expected—there’s no need to grovel around at the level of single neurons, since the functional units are bunches of neurons, and they implement algorithms that are recognizable from conventional AI.
This sounds like a statement made by some hopeful neuromodeler looking for funding rather than a known truth of science.
You want the details? Ask the pirate, not the parrot.
Rawwrk. Pieces of eight.
Yes, but when we got into detail about how this might work and what the difficulties might be, I had some significant objections that weren’t answered.
I think it would make an interesting group effort to try and estimate the speed of neuro research to get some idea of how fast we can expect neuro-inspired AI.
I’m going to try and figure out the number of researchers working on the algorithms for long-term changes to neural organisation (LTP, neuroplasticity and neurogenesis). I get the feeling it is a lot smaller than the number working on short-term functionality, but I’m not an expert and not immersed in the field.
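As a starting point, here is the kind of back-of-the-envelope comparison I have in mind; every figure is a placeholder to be replaced with real counts from publication or funding databases.

```python
# Placeholder Fermi comparison of researcher headcounts in two neuroscience
# sub-areas. All counts are made-up placeholders, to be replaced with real
# numbers once gathered.

researchers_long_term = 500       # assumed: LTP, plasticity, neurogenesis algorithms
researchers_short_term = 5_000    # assumed: short-term neural functionality

ratio = researchers_short_term / researchers_long_term
print(f"Assumed ratio of short-term to long-term researchers: {ratio:.0f}x")
```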
Please do; this sounds extremely valuable.
I would do this with Shane, but I think it might be off-topic at the moment.
Ja, going off-topic.