give THEM plausible deniability about having to understand or know things based on their own direct assessment
I don’t follow what you are getting at here.
I’m just thinking about historical cases of catastrophic risk, and what was done. One thing that was done was that the government paid very clever people to put together models of what might happen.
My feeling is that the discussion around AI risk is stuck in an inadequate equilibrium, where everyone on the inside thinks it’s obvious but people on the outside don’t grok it. I’m trying to think of the minimum possible intervention to bridge that gap, something very, very different from your ‘talk … for several days face-to-face about all of this’. As you mentioned, that is not scalable.
On a simple level, all exponential explosions work on the same principle, which is that there’s some core resource, and in each unit of time, the resource is roughly linearly usable to cause more of the resource to exist and be similarly usable.
Neutrons in radioactive material above a critical density cause more neutrons, and so on to “explosion”.
Prions in living organisms catalyze more prions, which catalyze more prions, and so on until the body becomes “spongiform”.
Oxytocin causes uterine contractions, and uterine contractions are rigged to release more oxytocin, and so on until “the baby comes out”.
(Not all exponential processes are bad, just most. It is an idiom rarely used by biology, and when biology does use it, it tends to be to cause phase transitions where humble beginnings lead to large outcomes.)
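To make the shared mechanism concrete without any domain detail, here is a minimal sketch in Python, assuming discrete time steps (the function name, rates, and threshold are all illustrative, not drawn from any of the examples above):

```python
# A minimal sketch of the shared principle: a core resource where each
# unit present now is roughly linearly usable to create more of itself.
# All names here are illustrative, not from the examples above.

def run_chain_reaction(initial_resource: float,
                       reproduction_rate: float,
                       threshold: float,
                       max_steps: int = 100) -> int:
    """Return the first step at which the resource crosses `threshold`,
    or -1 if it never does within `max_steps`.

    Each step applies resource += reproduction_rate * resource, the
    discrete form of dR/dt = k*R, whose solution R(t) = R0 * e^(k*t)
    is exponential.
    """
    resource = initial_resource
    for step in range(max_steps):
        if resource >= threshold:
            return step
        # The resource present now is linearly usable to make more of it.
        resource += reproduction_rate * resource
    return -1

print(run_chain_reaction(1.0, 0.5, 1e6))   # supercritical: crosses at step 35
print(run_chain_reaction(1.0, -0.1, 1e6))  # subcritical: fizzles, returns -1
```

Below criticality the process fizzles out; above it the doubling time is constant, so the threshold arrives on a schedule that looks deceptively slow at first and then abrupt.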
“Agentic effectiveness” that loops on itself to cause more agentic effectiveness can work the same way. The inner loop uses optimization power to get more optimization power. Spelling out detailed ways to use optimization power to get more optimization power is the part where it feels like talking about zero-days to me?
Maybe it’s just that quite a few people literally don’t know how exponential processes work? That part does seem safe to talk about, and if it isn’t safe then the horse is out of the barn anyway. Also, a gap in such knowledge would explain why they don’t seem to understand this issue, and it would also explain why many of the same people handled COVID so poorly.
Do you have a cleaner model of the shape of the ignorance that is causing the current policy failure?