Well, I am fairly sure DL+RL will not lead to HLAI, on any reasonable timescale that would matter to us. You are not sure. Seems to me, we could turn this into a bet. Any sort of bet where you say DL+RL → HLAI after X years, I will probably take the negation of, gladly.
Hmmm...but if I win the bet then the world may be destroyed, or our environment could change so much the money will become worthless. Would you take 20:1 odds that there won’t be DL+RL-based HLAI in 25 years?
If you think money will be worth a lot now but not much in the future, Ilya could pay you money now in exchange for you paying him a lot of money in the future.
I often hear this response: “I can’t make bets on my beliefs about the Eschaton, because they are about the Eschaton.”
My response to this response is: you have left the path of empiricism if you can’t translate your insight about [topic] (in this case “AI progress”) into taking money via {bets with empirically verifiable outcomes} from folks without your insight.
---
If you are worried the world will change too much in 25 years, can you formulate a nearer-term bet you would be happy with? For example, something non-toy DL+RL would do in 5 years.
“I can’t make bets on my beliefs about the Eschaton, because they are about the Eschaton.” -- Well, it makes sense. Besides, I did offer you a bet that takes into account a) that the money may be worth less in my branch, and b) that I don’t think DL+RL AGI is more likely than not, just plausible. If you’re more than 96% certain there will be no such AI, 20:1 odds are a good deal (break-even arithmetic sketched below).
But anyway, I would be fine with betting on a nearer-term challenge. How about—in 5 years, a bipedal robot that can run on rough terrain, as in this video, using a policy learned from scratch by DL+RL (possibly including a simulated environment during training), at 1:1 odds?
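A minimal sketch of that break-even arithmetic, read so that the skeptic stakes 20 units against 1 (the thread states only the odds, so this reading is an assumption, though it is the one under which the 96% figure is meaningful):

\[
\mathrm{EV}_{\text{skeptic}} = p \cdot 1 - (1 - p) \cdot 20 > 0 \iff p > \frac{20}{21} \approx 95.2\%,
\]

where \(p\) is the skeptic’s credence that no DL+RL-based HLAI appears within 25 years. Being “more than 96% certain” therefore clears the break-even threshold.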
Have something in mind?
No, that wouldn’t surprise me in 5 years. Nor would it count as “scary progress” to me. That’s bipedalism, not a stride towards general intelligence.
---
“Well, it makes sense.”
That makes your beliefs a religion, my friend.