Does anyone know if Roko or Hollerith developed the idea much further?
Roko combined the concept with the (rather less sensible) idea of promoting those instrumental values into terminal values, and was met with a chorus of “Unfriendly AI”.
Hollerith produced several pages on the topic.
Probably the best-known continuation is via Omohundro.
“Universal Instrumental Values” is much the same idea as “Basic AI drives” dressed up a little differently:
http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/
http://selfawaresystems.com/2007/10/05/paper-on-the-nature-of-self-improving-artificial-intelligence/
You are right. I hadn’t made that connection. Now I have a little more respect for Omohundro’s work.
I was a little bit concerned about your initial Omohundro reaction.
Omohundro’s material is mostly fine and interesting. It’s a bit of a shame that there isn’t more maths—but it is a difficult area where it is tricky to prove things. Plus, IMO, he has the occasional zany idea that takes your brain to interesting places it didn’t dream of before.
I maintain some Omohundro links here.
As a side point, you could also re-read “Basic AI drives” as “Basic Replicator Drives”—it’s systemic evolution.
Interesting, hadn’t seen Hollerith’s posts before. I came to a similar conclusion: AIXI’s behavior exemplifies a final attractor for intelligent systems with long planning horizons.
If the horizon is long enough (effectively infinite), the single behavioral attractor is maximizing computational power and applying it toward extensive universal simulation/prediction (see the expectimax expression below).
This relates to simulism and the Simulation Argument, as any superintelligences/gods can thus be expected to create many simulated universes, regardless of their final goal-evaluation criteria.
In fact, perhaps the final goal criteria apply to creating new universes with the desired properties.
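For reference, here is where the horizon enters AIXI’s definition, paraphrasing Hutter (m is the horizon, U a universal monotone Turing machine, ℓ(q) the length of program q); treat this as a sketch of the standard expectimax expression, not something taken from Hollerith’s posts:

$$ a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[\, r_k + \cdots + r_m \,\big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)} $$

The horizon only enters through how many future rewards get summed, so the attractor claim is an interpretation of what dominates that sum as m grows (better world models and more raw computation help on almost any reward channel), not something the formula proves by itself.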