The SIAI position does not require “obviously X” from a decision perspective; the opposite one does. To be so sure of something as complicated as the timeline of FAI math vs. AGI development seems seriously foolish to me.
It is not a matter of being sure of it, but of weighing it against what is asked for in return, against other possible events of equal probability, and against the utility payoff of spending the resources on something else entirely.
I’m not asking the SIAI to prove “obviously X”, but rather to establish the specific probability of X that they claim it has within the larger context of possibilities.
No such proof is possible with our machinery.
=======================================================
Capa: It’s the problem right there. Between the boosters and the gravity of the sun the velocity of the payload will get so great that space and time will become smeared together and everything will distort. Everything will be unquantifiable.
Kaneda: You have to come down on one side or the other. I need a decision.
Capa: It’s not a decision, it’s a guess. It’s like flipping a coin and asking me to decide whether it will be heads or tails.
Kaneda: And?
Capa: Heads… We harvested all Earth’s resources to make this payload. This is humanity’s last chance… our last, best chance… Searle’s argument is sound. Two last chances are better than one.
=====================================================
(Sunshine 2007)
Not being able to calculate the chances does not excuse one from using one’s best de-biased neural machinery to guess at a range. IMO 50 years is reasonable (I happen to know something about the state of AI research outside of the FAI framework). I would not roll over in surprise if it’s 5 years, given the state of certain technologies.
I’m curious, because I like to collect this sort of data: what is your median estimate?
(If you don’t want to say because you don’t want to defend a specific number or list off a thousand disclaimers I completely understand.)
Median 15-20 years. I’m not really an expert, but certain technologies are coming really close to modeling cognition as I understand it.
Thanks!