“What do AI safety/accelerationist people disagree on that they could bet on? What concrete things are going to happen in the next two years that would prove one party right or wrong?”
That seems to me quite confused. Why would you expect that concrete things appear in the next two years that can prove either side wrong?
A lot of what AI safety people worry about is the dynamics of a world where most of the power is held by AI.
Most power won’t be held by AI in the next two years, so we can’t make observations now that tell us much about those future dynamics.
AI safety people have made some predictions that came true, such as the difficulty of keeping AI boxed: we now see ChatGPT browsing the internet instead of being boxed. But further predictions of this kind are unlikely to convince accelerationists. The same goes for predictions about autonomous weapon systems.