Logan Zoellner thank you for highlighting one of your previous points,
You asked me to agree with your speculation that GPT-5 will not destroy the world. I will not agree with your speculation because I have no idea whether GPT-5 will do that or not. This does not mean that I agree with the statement that GPT-5 WILL destroy the world. It just means that I do not know.
I would not use Manifold as any data point in assessing the potential danger of future AI.
What would you use instead?
In particular, I’d be interested in knowing what probability you assign to the chance that GPT-5 will destroy the world and how you arrived at that probability.
Logan Zoellner thank you for your question,
In my view we need more research, not people drawing inferences on extremely complex matters from what random people without that knowledge happen to bet on a given day. It may be fun entertainment, but it does not say anything about anything.
I do not assign any probabilities. To me, this whole game of assigning probabilities around x-risk and AI safety in general is just silly. How can anyone say, for instance, that there is a 10% risk of human extinction? What does that mean? Is that a 1-in-10 chance at a given moment, over a 23.7678-year period, forever, or what? And most importantly, how do you come up with the figure of 10%? Based on what, exactly?
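To make the underspecification concrete: the same headline "10%" means wildly different things depending on the time frame attached to it. A minimal sketch, assuming (purely for illustration) a constant, independent per-year risk — an assumption the headline number itself never states:

```python
# Sketch: the same headline "10%" at different horizons.
# Assumes a constant, independent per-year risk, which is itself
# a modeling choice the bare number leaves unstated.

def cumulative_risk(annual_p: float, years: int) -> float:
    """Probability of at least one occurrence within `years`,
    given an independent annual probability `annual_p`."""
    return 1 - (1 - annual_p) ** years

# "10% per year" compounds to far more than 10% over decades:
print(f"{cumulative_risk(0.10, 10):.1%}")   # ~65.1% within 10 years
print(f"{cumulative_risk(0.10, 50):.1%}")   # ~99.5% within 50 years

# Conversely, "10% within 50 years" is only ~0.21% per year:
annual = 1 - (1 - 0.10) ** (1 / 50)
print(f"{annual:.2%}")                      # ~0.21% per year
```

The point is not that either number is right, but that without a stated horizon and model, "10%" is not a claim anyone can check.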
I think it's quite unlikely that GPT-5 will destroy the world. That said, I think it's generally reasonable to doubt prediction markets on questions that can't be fairly evaluated both ways.
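To spell out why such a market can't be evaluated both ways: a YES share on a doomsday question only pays out in worlds where nobody is around to collect. A toy expected-value sketch, under the illustrative assumption that the payout is worthless if YES resolves:

```python
# Toy model: buying YES on "the world ends" when the $1 payout
# is worth nothing to you in the worlds where YES resolves.

def ev_of_yes_bet(price: float, true_p: float, value_if_yes: float = 0.0) -> float:
    """Expected value of buying one YES share at `price`.
    The $1 payout on YES is worth `value_if_yes` to the buyer;
    0 encodes 'you cannot collect after doomsday'."""
    return true_p * value_if_yes + (1 - true_p) * 0.0 - price

# Even a trader who believes doom is 50% likely loses money
# buying YES at any positive price:
print(ev_of_yes_bet(price=0.05, true_p=0.50))  # -0.05: a sure loss
```

So the price of YES on such a market tells us little about what traders actually believe.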
compared to what?