OK, what’s YOUR position, and how much do you know? Then Yvain can dump historical facts on you, and we’ll see how far you shift and in what direction.
So, my position is:
Israel/Palestine is a significant global risk. Their squabbling and fundamentalism could easily escalate to kill us all.
Therefore, I am for peace in the Middle East, irrespective of which faction gains most through that peace.
This is quite a utilitarian position. But that isn’t much of a problem for me as my emotional involvement is pretty low. I can afford to be cool and calculating about this one. What do I know? Mostly facts gained through casual Wikipedia’ing.
Israel is more competent than the Arab states; again and again it has proved to be the side with the greater intelligence and military effectiveness, e.g. the Yom Kippur War, Osiraq, etc.
That does not mean that the Israelis are all nice guys.
Nor does it mean that the Arab nations are all nice guys.
For me, living in an Arab country would be hell. They disvalue freedom, equality, rational secular enlightenment values, knowledge—basically everything I stand for. I am therefore weakly incentivized to make sure that the Arabic/Islamic culture complex doesn’t get too powerful.
Israeli secret services etc. are creepy. They kidnap people. Not cool. But overall this seems to be balanced by the fact that Israel contains a lot of people I would probably like—people who share my values.
This is indeed a pretty utilitarian position. I think the objection you’re likely to run into is that by evaluating the situation purely in terms of the present, it sweeps historic precedents under the rug.
Put another way, the “this conflict represents a risk, let’s just cool it” argument can just as easily be made by any aggressor directly after initiating the conflict.
Yup. If you don’t punish aggressors and just demand “peace at any price” once the war starts, that peace sure won’t last long.
If I expected the current geopolitical situation to continue for a long time, I would agree. But neither of us does; we both place a high probability on either FAI or uFAI within 100 years, so the top priority is just to survive that long.
Also, even if you assign some probability to there being no singularity any time soon, the expected reward in the scenario where a singularity does come soon is much higher, since you would get to live a lot longer; so you should weight that possibility more heavily.
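To make that weighting argument concrete, here is a minimal sketch of the expected-value comparison. All of the probabilities and lifespans below are invented placeholders for illustration, not anyone's actual estimates:

```python
# Toy expected-value comparison: how much concern should go to the
# "singularity soon" scenario vs. "no singularity soon"?
# Every number below is a made-up assumption used only to show the structure.

p_singularity_soon = 0.3           # assumed probability of a singularity within your lifetime
p_no_singularity = 1 - p_singularity_soon

years_if_singularity = 10_000      # assumed (very long) lifespan if things go well post-singularity
years_if_no_singularity = 50       # assumed remaining ordinary lifespan

# Expected life-years contributed by each scenario
ev_soon = p_singularity_soon * years_if_singularity
ev_not = p_no_singularity * years_if_no_singularity

print(f"EV (singularity soon):    {ev_soon:.0f} life-years")
print(f"EV (no singularity soon): {ev_not:.0f} life-years")
print(f"Ratio: {ev_soon / ev_not:.1f}x")
# Even with a modest probability, the huge payoff dominates the expected value,
# which is the sense in which you should "care more about that possibility".
```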
(Yesterday I heard someone who ought to know say human-level AI, and not provably friendly, in 16 years. Yes, my jaw hit the floor too.)
I hadn’t thought of the “park it, we have bigger problems” or “park it, Omega will fix it” approach, but it might make sense. That raises a question, and I hope it’s not treading too far into off-LW-topic territory: to what extent ought a reasoning person to act as if they expected gradual, incremental change in the status quo, and to what extent ought their planning to be dominated by the expectation of large disruptions in the near future?
Well, if you actually believe the kinds of predictions that say the singularity is coming within your lifetime, you should expect the status quo to change. If you don’t, then I’d be interested to hear your argument as to why not.
The question I was struggling to articulate was more like: should I give credence to my own beliefs? How much? And how do I deal with an instinct that doesn’t want to put AI and postmen in the same category of “real”?
If you don’t give credence to them… then they’re not your beliefs! If you go to Transhumanist events and profess to believe that a singularity is likely within 20 years, but feel hesitant when someone spells out concrete actions you could take in your own life that would be advantageous if and only if the singularity hypothesis is true, then you don’t really believe it.
Who on Earth do you think ought to know that?
Shane Legg, who was at the London LW meetup.
Shane expressed this opinion to me too. I think he needs to be more probabilistic with his predictions, i.e. give a probability distribution. He didn’t adequately answer all of my objections about why neuro-inspired AI will arrive so soon.
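As a purely illustrative sketch of what “give a probability distribution” could look like in place of a single-year prediction (the buckets and weights below are invented, not Shane's actual views):

```python
# A point prediction ("human-level AI in 16 years") vs. a probability
# distribution over arrival dates. All numbers are hypothetical placeholders.

arrival_distribution = {
    "within 10 years": 0.05,
    "10-20 years":     0.30,
    "20-40 years":     0.35,
    "40-100 years":    0.20,
    "not this century": 0.10,
}

# Sanity check: the probabilities should sum to 1.
assert abs(sum(arrival_distribution.values()) - 1.0) < 1e-9

# A distribution lets you answer questions a point estimate can't, e.g. the
# probability of arrival within 20 years:
p_within_20 = (arrival_distribution["within 10 years"]
               + arrival_distribution["10-20 years"])
print(f"P(human-level AI within 20 years) = {p_within_20:.2f}")
```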
From what he explained, the job of reverse engineering a biological mind is looking much easier than expected—there’s no need to grovel around at the level of single neurons, since the functional units are bunches of neurons, and they implement algorithms that are recognizable from conventional AI.
This sounds like a statement made by some hopeful neuromodeler looking for funding rather than a known truth of science.
You want the details? Ask the pirate, not the parrot.
Rawwrk. Pieces of eight.
Yes, but when we got into detail about how this might work and what the difficulties might be, I had some significant objections that weren’t answered.
I think it would make an interesting group effort to try to estimate the speed of neuro research, to get some idea of how soon we can expect neuro-inspired AI.
I’m going to try to estimate the number of researchers working on the algorithms for long-term changes to neural organisation (LTP, neuroplasticity, and neurogenesis). I get the feeling it is far fewer than the number working on short-term functionality, but I’m not an expert and not immersed in the field.
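A rough Fermi-style template for the comparison I have in mind; every number here is an invented placeholder to show the structure of the estimate, not an actual survey result:

```python
# Toy researcher-count comparison for long-term vs. short-term neural mechanisms.
# All figures below are guesses used only to illustrate the shape of the estimate.

long_term_groups = 40        # assumed labs focused on LTP / plasticity / neurogenesis algorithms
short_term_groups = 400      # assumed labs focused on short-term neural function
researchers_per_group = 8    # assumed average headcount per lab

long_term_researchers = long_term_groups * researchers_per_group
short_term_researchers = short_term_groups * researchers_per_group

print(f"Long-term-mechanism researchers (guess):  {long_term_researchers}")
print(f"Short-term-function researchers (guess): {short_term_researchers}")
print(f"Ratio: 1 : {short_term_researchers // long_term_researchers}")
# Replacing these guesses with real counts (e.g. from publication or grant
# databases) would be the actual group effort proposed above.
```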
Please do; this sounds extremely valuable.
I would do this with Shane, but I think it might be off-topic at the moment.
Ja, going off-topic.
Are you sure you’re not playing “a deeply wise person doesn’t pick sides, but scolds both for fighting”?
Maybe, though I am not consciously doing this. See my response to EY above.