Can something be optimization-like without being ontologically mental? In other words, if a higher level is a universal Turing machine that devotes more computing resources to other Turing machines depending on how many 1s, as opposed to 0s, they've written so far, is that the sort of optimization-like thing we're talking about? I'm assuming you don't mean anything intrinsically teleological.
Yeah, I think if base-level reality started out optimization-like, it's not mind-like, or at least not any kind of mind that we'd be familiar with. It might be something like Schmidhuber's Gödel Machine with a relatively simple objective function.
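As a toy illustration of the kind of non-teleological, optimization-like mechanism the question describes, here is a minimal Python sketch. Everything in it is assumed for illustration, not taken from the conversation: `BitWriter` is a trivial stand-in for a Turing machine (it just writes biased random bits rather than executing a real transition table), and the plus-one smoothing in the scheduler is an arbitrary choice so that machines with no 1s yet still receive some compute.

```python
import random


class BitWriter:
    """Toy stand-in for a Turing machine: each step writes a 0 or 1
    to its tape, with a fixed machine-specific probability of a 1."""

    def __init__(self, bias, rng):
        self.bias = bias  # probability of writing a 1 on each step
        self.tape = []
        self.rng = rng

    def step(self):
        self.tape.append(1 if self.rng.random() < self.bias else 0)

    def ones(self):
        return sum(self.tape)


def schedule(machines, rounds, steps_per_round):
    """Each round, split a fixed compute budget across machines in
    proportion to how many 1s each has written so far (plus one,
    so machines that have written nothing still get a chance)."""
    for _ in range(rounds):
        weights = [m.ones() + 1 for m in machines]
        total = sum(weights)
        for m, w in zip(machines, weights):
            for _ in range(round(steps_per_round * w / total)):
                m.step()


rng = random.Random(0)
machines = [BitWriter(b, rng) for b in (0.2, 0.5, 0.8)]
schedule(machines, rounds=50, steps_per_round=100)
for m in machines:
    print(f"bias={m.bias:.1f}  steps={len(m.tape)}  ones={m.ones()}")
```

Running this, the machine that writes the most 1s ends up receiving most of the compute, yet nothing in the scheduler represents goals or beliefs; it just reweights resources by a simple count, which is the sense in which such a process can be optimization-like without being mind-like.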
What does “intrinsically teleological” mean?