If you’re actually interested in the answer to the question you describe yourself as wondering about, you might consider setting up a poll.
Conversely, if you’re actually interested in expressing the belief that Holden is essentially correct while phrasing it as a rhetorical question for the usual reasons, then a poll isn’t at all necessary.
Well, maybe it is poorly worded; I would also like to know who here thinks that Holden is essentially correct.
What probability would you give to Holden being essentially correct? Why?
I’m going to read between the lines a little, and assume that “Holden is essentially correct” here means roughly that donating money to SI doesn’t significantly reduce human existential risk. (Holden says a lot of stuff, some of which I agree with more than others.) I’m >.9 confident that’s true. Holden’s post hasn’t significantly altered my confidence of that.
Why do you want to know?
Well, he estimated the expected effect on risk as an insignificant increase in risk. That, to me, is the strong point; the ‘does not reduce’ framing is a weaker claim, prone to eliciting a Pascal’s-wager-type response.
I am >.9 confident that donating money to SI doesn’t significantly increase human existential risk.
(Edit: Which, on second read, I guess means I agree with Holden as you summarize him here. At least, the difference between “A doesn’t significantly affect B” and “A insignificantly affects B” seems like a difference I ought not care about.)
I also think Pascal’s Wager type arguments are silly. More precisely, given how unreliable human intuition is when dealing with very low probabilities and when dealing with very large utilities/disutilities, I think lines of reasoning that rely on human intuitions about very large very-low-probability utility shifts are unlikely to be truth-preserving.
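A toy illustration of that failure mode, with made-up numbers (not figures anyone in this thread has claimed): a speculative intervention offering a $10^{-10}$ chance of saving $10^{15}$ lives has a naive expected value of

$$10^{-10} \times 10^{15} = 10^{5} \text{ lives},$$

which appears to dominate a near-certain intervention that saves $10^{3}$ lives. But a probability estimate like $10^{-10}$ is far below the resolution at which human judgment is calibrated, so small errors in that estimate completely change which option looks better, and the intuition-driven conclusion isn’t reliable.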
Why do you want to know?
On that, I’m pretty sure that the SI would not rush that way. Consider the parable of the dragon. This isn’t the story of someone who’s willing to cut corners, but of someone who accepts that delays for checking, even delays that cause people to die, are necessary.
Plus, if they develop a clear enough architecture that one can query what the AI is thinking, then one could see potential future failures while still in testing, without those contingencies having to actually occur. That will be one of the keys, I think: make the AI’s reasons something we can follow, even if we couldn’t generate those arguments ourselves in a reasonable time-frame.
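For concreteness, here is a minimal sketch of the ‘queryable reasons’ idea, in Python; everything in it (the TransparentAgent class, its toy whitelist policy, the why() query) is invented for illustration and doesn’t correspond to SI’s or anyone else’s actual architecture:

```python
# Illustrative sketch only -- not a real system's design.
# Idea: every decision carries a human-followable trace of its reasons,
# so testers can ask "why?" before a failure ever occurs in deployment.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Decision:
    action: str
    reasons: List[str] = field(default_factory=list)  # justification a reviewer can follow


class TransparentAgent:
    def __init__(self) -> None:
        self.log: List[Decision] = []

    def decide(self, observation: str) -> Decision:
        # Toy policy: act only on cases that were explicitly reviewed,
        # and record the reasons either way so they can be audited.
        if observation == "routine_request":
            d = Decision("comply", [f"{observation!r} is on the reviewed whitelist"])
        else:
            d = Decision(
                "defer_to_human",
                [f"{observation!r} falls outside the tested cases",
                 "deferring avoids acting on unreviewed reasoning"],
            )
        self.log.append(d)
        return d

    def why(self, index: int = -1) -> List[str]:
        """Query the recorded reasons behind a past decision."""
        return self.log[index].reasons


agent = TransparentAgent()
agent.decide("novel_situation")
print(agent.why())  # inspect the reasoning without the failure having to happen
```

The only design point this gestures at is that the trace is produced as part of deciding, rather than reconstructed after something has already gone wrong.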