This recent tweet of Eliezer’s crystallized a concept for me that I think is relevant to the ideas about optimization and agents discussed in the dialogue: https://twitter.com/ESYudkowsky/status/1639406023680344064
In complicated real-world systems, the thing that is better at “preimaging outcomes onto choices” is the scary one, and the interesting systems are the ones where the choosing algorithm is itself complex.
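To make that concrete, here’s a minimal sketch (my own toy formalization of the tweet’s framing, not anything from the dialogue): an optimizer “preimages outcomes onto choices” by searching for the choices whose predicted outcomes land in some target set.

```python
# Toy formalization (hypothetical names and numbers, for illustration only):
# an optimizer "preimages outcomes onto choices" by searching for the
# choices whose predicted outcomes land in a target set.

def preimage_optimize(choices, world_model, is_good_outcome):
    """Return the choices whose predicted outcomes count as 'good'."""
    return [c for c in choices if is_good_outcome(world_model(c))]

# Tiny domain, so brute-force search works fine here.
choices = range(100)
world_model = lambda c: (c * 7) % 23        # some opaque choice -> outcome map
good_choices = preimage_optimize(choices, world_model, lambda o: o >= 20)
print(good_choices)
```

In this toy version the search is trivial because the choice space is tiny; the scary real-world systems are exactly the ones where the choice space is astronomical and inverting the choice→outcome map takes a powerful choosing algorithm.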
Sure, it’s true that you can construct toy systems in restricted domains (like the mushrooms and peppers one) and define “agents” in these systems which technically violate certain efficiency assumptions.
But the reason these examples aren’t compelling (to me) is that it’s kind of obvious what all the agents in them will do, once you write down their utility functions and the starting resources available to them. There’s not much complexity “left over” for interesting decision algorithms.
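As an illustration of what I mean (my own toy utility function and numbers, not necessarily the dialogue’s actual setup): give an agent a Cobb-Douglas utility over the two goods and a budget, and its behavior falls out in closed form.

```python
# Hypothetical toy agent: U(m, p) = m**alpha * p**(1 - alpha) with a budget
# constraint. The optimal bundle has a textbook closed-form solution, so
# nothing is left over for an interesting decision algorithm to do.

def cobb_douglas_demand(alpha, wealth, price_mushrooms, price_peppers):
    mushrooms = alpha * wealth / price_mushrooms     # spend fraction alpha on mushrooms
    peppers = (1 - alpha) * wealth / price_peppers   # spend the rest on peppers
    return mushrooms, peppers

print(cobb_douglas_demand(alpha=0.4, wealth=100, price_mushrooms=2, price_peppers=5))
# -> (20.0, 12.0): the "decision" is fully pinned down by the setup.
```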
Two of the real-world examples in this dialogue actually demonstrate the difference between these kinds of systems nicely:
I could not step into the shoes of a successful hedge fund trader and, given all the same choices and resources available to the trader, make decisions which result in more money in my trading account than the original trader could.
OTOH, if I were some kind of ghost-in-the-machine of a bacterium making ATP, I could (probably) make the same decisions that the actual bacterium makes, or better ones where that’s possible, given all the same information and choices available to it. (Though I might need a computer to keep track of all the hormones and blood-glucose levels and feedback loops.)
I can see how both examples might tell us something useful about intelligent systems, but the markets example seems more likely to have something to say about what the actual scary thing looks like.