See, efficient ‘cross-domain optimization’ in a science-fictional setting would let the AI optimize real-world quantities. In the real world, it would be good enough (and a lot easier) if it could only find the maxima of mathematical functions.
Is it able to make a model of the world?
It is able to make a very approximate and bounded mathematical model of the world, optimized for finding the maxima of a mathematical function. That is because it is inside the world, and has only a tiny fraction of the world’s computational power.
Are human reactions also part of this model?
That would make the software perform at a grossly sub-par level when it comes to producing technical solutions to well-defined technical problems, compared to other software running on the same hardware.
Are AI’s possible outputs also part of this model?
Another waste of computational power.
Are human reactions to AI’s outputs also part of this model?
Enormous waste of computational power.
I see no reason to expect your “general intelligence with Machiavellian tendencies” to be even remotely close in technical capability to a “general intelligence which will show you its simulator as-is, rather than reverse-engineer your thought processes to figure out which simulator is best to show”. Hell, we do the same with people: we design communication methods like blueprints (or mathematical formulas, or other things that are not in natural language) precisely to cut down the ‘predict other people’s reactions to it’ overhead.
While in a fictional setting you can talk of a grossly inefficient solution that nonetheless beats everyone else to a pulp, in practice such massively handicapped designs are not worth worrying about.
‘General intelligence’ sounds good, but beware of the halo effect. Science fiction tends to accept no substitutes for its anthropomorphic ideals, while real progress follows a dramatically different path.
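To make the distinction concrete, here is a minimal sketch (mine, not from the discussion above) of what “only finds maxima of mathematical functions” means: a black-box maximizer that evaluates whatever function it is handed, with no model of what that function refers to in the world, let alone of human reactions to its output. The function and search bounds below are arbitrary illustrative choices.

```python
import random

def maximize(f, lo, hi, iters=10_000, seed=0):
    """Black-box maximizer: random search over the interval [lo, hi].

    It knows nothing about what f 'means' in the real world; it only
    evaluates the mathematical function it is given and keeps the best
    point seen so far.
    """
    rng = random.Random(seed)
    best_x = lo
    best_y = f(best_x)
    for _ in range(iters):
        x = rng.uniform(lo, hi)
        y = f(x)
        if y > best_y:
            best_x, best_y = x, y
    return best_x, best_y

# Illustrative objective: f(x) = -(x - 3)^2 + 5, whose true maximum
# is at x = 3 with f(3) = 5.
x, y = maximize(lambda x: -(x - 3) ** 2 + 5, 0.0, 10.0)
```

All the “cross-domain” generality such a tool needs is that `f` can encode any formalized problem; everything about the world that matters has to be baked into `f` by whoever poses the problem.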