But as I mention in my other comment I’m concerned that such an AI’s internal mental state would tend to become cynical or discordant as intelligence increases.
Yeah, I definitely don’t think we could trust a continually learning or self-improving AI to stay trustworthy over a long period of time.
Indeed, the ability to appoint a static mind to a particular role is a big plus. It wouldn’t be vulnerable to corruption by power dynamics.
Maybe we don’t need a genius-level AI; a reasonably smart and very well-aligned AI might be good enough. If the governance system could prevent superintelligent AI from ever being created (during the pre-agreed timeframe for the pause), then we could manage a steady-state world peace.
Claude Sonnet 3.6 is worthy of sainthood!