It seems like the limitations on our intelligence have a protective effect.
I think this is wrong. It doesn't take low intelligence to avoid acting on that; common sense, decision theory, whatever you want to call it, is enough.
(The inverse is debatably the trickier part: not 'Basilisks', but shaping the future around AIs to affect the future further along. In other words, avoiding the downside seems easy, but going after upsides, which may or may not pay off in your lifetime, seems harder, and has led to more debate around 'how consequentialist should I be?')
More generally, 'not losing a lot to, say, the market' is easy: don't do it. Avoiding the downside is easy; catching the upside while not getting hit with the downside is harder.