Regarding the friendly neighborhood politician AGI: [Edit, I see you were going somewhere else with this point]
Your friendly neighborhood AGI wants you to like its output, to really like it, so it tells you what you will be happy to hear every time, even if the results would be quite bad.
Does that kill you (as in, kill everyone)?
It certainly could kill you. It will certainly, in some situations, choose errors over correct answers on purpose. But so will humans. So will politicians. We don't exactly make the best possible decisions or avoid bias in our big choices. This seems like a level of error that is often going to be survivable.
Politicians didn't kill us because they are slow and others have time to respond, AND we now have "democracy" in place, which is a pretty strong attempt at making politicians care about people's opinions, AND we can kind of predict politicians (they are not alien), and there are transparency mechanisms in place. I sometimes forget that even with all of these things in place politicians still kill tons of people, but Zvi is kind enough to remind me with his posts. ;)