Preparing anyway even if it’s very low probability because of extreme consequences is Pascal’s Wager
ASI is probably coming sooner or later. Someone has to prepare at some point; the question is when.
I consider AI development to be a field about which I have little definite information. It’s hard to assign less than 1% probability to statements about ASI (excepting the highly conjunctive ones). I don’t consider things like dinosaur-killing asteroids, with their 1-in-100-million probabilities, to be Pascal’s muggings.
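To make that distinction concrete, here’s a rough expected-value sketch in Python. Every number in it is an illustrative assumption (the death toll especially), not a figure from the discussion above; the point is only that a ~1% probability is nowhere near the vanishingly small, adversarially chosen numbers that make something a Pascal’s mugging.

```python
# Rough expected-value sketch. All numbers are illustrative assumptions.
people_at_stake = 8e9             # hypothetical: everyone, in both scenarios

p_asi_disaster = 0.01             # "hard to assign less than 1% to statements about ASI"
p_asteroid     = 1e-8             # dinosaur-killer class impact, ~1 in 100 million

ev_asi      = p_asi_disaster * people_at_stake   # 80,000,000 expected deaths
ev_asteroid = p_asteroid     * people_at_stake   # 80 expected deaths

print(f"Expected deaths (ASI, p = 1%):         {ev_asi:,.0f}")
print(f"Expected deaths (asteroid, p = 1e-8):  {ev_asteroid:,.0f}")
# Both are worth *some* preparation; neither probability is the arbitrarily tiny,
# easily-driven-lower kind that defines a Pascal's mugging.
```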
If and when a canary “collapses” we will have ample time to design off switches and identify red lines we don’t want AI to cross
We have a tricky task, and we don’t know how long it will take. Having hit one of these switches doesn’t help us do the task much. A student is given an assignment in August; the due date is March of the following year. They decide to put it off until it snows. Snowfall is an indicator that the due date is coming soon, but not a good one. Either way, it doesn’t help them do the assignment.
What is a “fully self driving” car? We have had algorithms that kind-of-usually work for years, and a substantial part of modern progress in the field looks like gathering more data and developing driving-specific tricks. Suppose that you needed 100 million hours of driving data to train current AI systems. A company pays drivers to put a little recording box in their car. It will take 5 years to gather enough data, and after that we will have self-driving cars. What are you going to do in 5 years’ time that you can’t do now? In reality, we aren’t sure whether you need 50, 100, or 500 million hours of driving data with current algorithms, and we aren’t sure how many people will want the boxes installed. (These boxes are usually built into satnavs or lane-control systems in modern cars.)
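A quick back-of-the-envelope calculation shows how much the answer swings on those unknowns. Every figure below (hours per driver, number of boxes installed) is an assumption made up for illustration, not real fleet data:

```python
# Back-of-the-envelope: how long until "enough" driving data?
# All figures here are illustrative assumptions, not real statistics.

hours_per_driver_per_year = 300   # roughly an hour a day on workdays

def years_to_collect(hours_needed: float, num_drivers: int) -> float:
    """Years to accumulate `hours_needed` of driving data from `num_drivers` cars."""
    return hours_needed / (num_drivers * hours_per_driver_per_year)

for hours_needed in (50e6, 100e6, 500e6):      # the 50/100/500 million hour uncertainty
    for num_drivers in (20_000, 100_000):       # uncertainty in how many boxes get installed
        print(f"{hours_needed/1e6:>5.0f}M hours, {num_drivers:>7,} drivers: "
              f"{years_to_collect(hours_needed, num_drivers):5.1f} years")
```

Depending on which assumptions you plug in, “enough data” arrives in under 2 years or in over 80, which is the point: the canary doesn’t tell you when.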
Limited versions of the Turing test (like Winograd Schemas)
What percentage do you want, and what will you do when GPT-5 hits it?
We are decades away from the versatile abilities of a 5 year old
This is a “result” obtained by focusing on the things that 5-year-olds are good at and ignoring the things they are bad at.
Sometimes you have a problem, like looking at an image of some everyday scene and saying what’s happening in it, that 5-year-olds are (or at least were, a few years ago) much better at than current AI. And sometimes you have a problem like looking at a load of stock data and using linear regression to find correlations between prices: nothing like that existed in the environment of evolutionary adaptedness, so human brains aren’t built to do it.
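Here is a minimal sketch of that second kind of task, just to show how little machinery it takes. The price series are synthetic (a random walk plus a correlated noisy copy); the example only illustrates the category of problem, not any real market data:

```python
# A minimal sketch: fitting a linear regression between two (synthetic) price series.
# Trivial for a few lines of code, unnatural for an unaided human brain.
import numpy as np

rng = np.random.default_rng(0)
n_days = 500
price_a = 100 + np.cumsum(rng.normal(0, 1, n_days))   # random-walk "stock A"
price_b = 0.8 * price_a + rng.normal(0, 5, n_days)    # correlated "stock B" plus noise

# Ordinary least squares: price_b ~ slope * price_a + intercept
slope, intercept = np.polyfit(price_a, price_b, deg=1)
correlation = np.corrcoef(price_a, price_b)[0, 1]

print(f"slope={slope:.3f}, intercept={intercept:.2f}, correlation={correlation:.3f}")
```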
If and when a canary “collapses” we will have ample time to design off switches and identify red lines we don’t want AI to cross
Even if that were true, how would you know it? Technological progress is hard to predict. Designing off switches is utterly trivial if the system isn’t trying to avoid the off switch being pressed, and actually quite hard if the AI is smart enough to know about the off switch and remove it.
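To see why the first case is trivial, here is a toy sketch of an off switch for a system that isn’t modelling the switch at all. All the names here are hypothetical, made up for illustration:

```python
# Toy sketch of the trivial case: an off switch for a system that is not
# modelling or optimising against the switch. All names are hypothetical.
import threading

shutdown_requested = threading.Event()

def take_one_action():
    pass  # stand-in for whatever the system actually does each step

def run_agent():
    while not shutdown_requested.is_set():   # this check is the entire "off switch design"
        take_one_action()

# Pressing the off switch from anywhere else in the program:
#   shutdown_requested.set()
#
# This works precisely because take_one_action() contains no planning: nothing in
# the loop asks "will I get shut down?" and picks actions to prevent it. A planner
# smart enough to model the switch, whose objective scores worse when it is switched
# off, would rank "disable or circumvent the switch" highly among its options --
# and that is the hard version of the problem.
```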