I don’t understand what you’re ultimately concluding follows from that statement being false, or from it not being related to computation.
One thing I’m saying is that arguments of the form: “doing X is computationally intractable, therefore a superintelligence won’t be able to do X” are typically using a loose, informal definition of “computationally intractable” which I suspect makes the arguments not go through. Usually because X is something like “build nanotech”, but the actual thing that is provably NP-hard is something more like “solve (in full generality) some abstracted modelling problem which is claimed to be required to build nanotech”.
Another thing is that even if X itself is computationally intractable, something epsilon different from X might not be. Non-rhetorical question: what does it matter if utility maximization is not “ideal”?
The whole world model is a chaotic, incomputable system. Example: how people will react to your plan, given no certain knowledge of which people will observe your plan.
This is a great example. Non-computability has a precise technical meaning. If you want to link this meaning to a claim about a plan in the physical world or limit on ability to model human behavior, you have to actually do the work to make the link in a precise and valid way. (I’ve never seen anyone do this convincingly.)
Another example is the halting problem. It’s provably undecidable, meaning there is no fully general algorithm that will tell you whether a program halts or not. And yet, for many important programs that I encounter in practice, I can tell at a glance whether they will halt or not, and prove it so. (In fact, under certain sampling / distribution assumptions, the halting problem is overwhelmingly likely to be solvable for a given randomly sampled program.)
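To make that concrete, here is a toy sketch (my own illustration, under the obvious assumption of a heavily restricted program class, not a general method): a conservative check that answers "halts" only when every loop is a for-loop over constant range arguments and nothing can hide recursion. It covers plenty of everyday programs while claiming nothing about arbitrary ones.

```python
import ast

def obviously_halts(source: str) -> bool:
    """Conservative syntactic check: True only if the program clearly halts.

    Not a general halting decider. False just means "couldn't tell",
    never "doesn't halt".
    """
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.While):
            return False  # possibly unbounded loop: give up
        if isinstance(node, ast.For):
            it = node.iter
            # only accept `for ... in range(<constant literals>)`
            if not (isinstance(it, ast.Call)
                    and isinstance(it.func, ast.Name)
                    and it.func.id == "range"
                    and all(isinstance(a, ast.Constant) for a in it.args)):
                return False
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            return False  # could hide recursion: give up
    return True

print(obviously_halts("total = 0\nfor i in range(10):\n    total += i\n"))  # True
print(obviously_halts("while True:\n    pass\n"))                          # False (couldn't tell)
```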
One thing I’m saying is that arguments of the form: “doing X is computationally intractable, therefore a superintelligence won’t be able to do X” are typically using a loose, informal definition of “computationally intractable” which I suspect makes the arguments not go through.
Not really. For an extreme example, simulating the entire universe is computationally intractable because it would require more resources than exist in the universe. By intractable I personally just mean that it requires much more effort to simulate or guess than to experiment. An obvious real-world example is cracking a hash, like you said. None of this has to do with NP-hardness; it’s more an estimate of the raw computational power required, and whether that exceeds the computational resources theoretically available. Another factor is the amount of uncertainty the calculation will necessarily have.
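To put rough numbers on the hash example (a back-of-the-envelope sketch with round figures I’m assuming, nothing more): even if every bit operation cost only the thermodynamic minimum, an energy budget of a full year of the Sun’s output is nowhere near enough to enumerate a 256-bit space.

```python
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 3.0                   # assume computation runs at ~3 K (roughly CMB temperature)
landauer_joules_per_bit = k_B * T * math.log(2)   # minimum energy per bit operation

solar_luminosity = 3.8e26        # watts, approximate
seconds_per_year = 3.15e7
energy_budget = solar_luminosity * seconds_per_year   # one year of the Sun's total output

max_bit_ops = energy_budget / landauer_joules_per_bit
guesses_needed = 2**256          # exhaustive search of a 256-bit space

print(f"bit operations affordable: ~{max_bit_ops:.2e}")
print(f"guesses needed:            ~{guesses_needed:.2e}")
print(f"shortfall factor:          ~{guesses_needed / max_bit_ops:.2e}")
```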
Another thing is that even if X itself is computationally intractable, something epsilon different from X might not be.
Anything that X reduces to must also be computationally intractable, or X itself isn’t intractable. So I don’t see any way around adding uncertainty by computing X − ε, which differs from X; again, that’s just an approximation by another name. In a model for building nanotech, an easy way this pops up is when there are multiple valid hypotheses given the raw data, but you need high-powered tools to experimentally determine which one is right because of some unknown unknowns (say, a constant that hasn’t yet been measured to the requisite precision). This isn’t too contrived, as it pops up in the real world a lot.
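A toy version of the multiple-valid-hypotheses situation (entirely made-up numbers of my own): two candidate models agree with every existing measurement to within instrument error, so no re-analysis of the old data picks a winner; only a more precise measurement can.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.1, 1.0, 10)                 # existing measurement points

model_a = lambda t: 2.0 * t                   # hypothesis A
model_b = lambda t: 2.0 * t + 1e-4 * t**3     # hypothesis B: differs only in a tiny term

instrument_error = 1e-2                       # current measurement precision
data = model_b(x) + rng.normal(0, instrument_error, size=x.shape)  # suppose B is "true"

print(f"max residual vs A: {np.max(np.abs(data - model_a(x))):.3g}")
print(f"max residual vs B: {np.max(np.abs(data - model_b(x))):.3g}")
print(f"largest A-B separation over the data: {np.max(np.abs(model_a(x) - model_b(x))):.3g}")
# Both residuals sit at the ~1e-2 noise level while the models differ by ~1e-4,
# so the existing data cannot distinguish them; an instrument ~100x more precise could.
```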
If you want to link this meaning to a claim about a plan in the physical world or limit on ability to model human behavior, you have to actually do the work to make the link in a precise and valid way.
Easy: lots of independent decision makers means there is an exponential number of states the entire system can be in. I’m sure there are a lot of patterns in human behavior, but humans are also diverse enough in their values and reactions that it would be difficult to know with certainty how particular humans will react. Therefore you can only assign probabilities to which state an action might cause the system to transition to. A lot of those probabilities will be best guesses as well, so there will be some sort of compounding error.
Concretely, in a world-domination plan, social engineering isn’t guaranteed to work. All it takes is for one of your actions to get the wrong reaction, e.g. to evoke suspicion, and you get shut down. And it may be impossible to know for certain whether you can avoid that ending.
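As a quick sanity check on both points (the figures here are placeholders I’m assuming, not estimates of anything real): the joint-state count explodes with the number of independent actors, and per-step success probabilities compound badly over a long plan.

```python
# State-space growth with independent actors:
n_actors, reactions_each = 100, 3
n_states = reactions_each ** n_actors            # possible joint reactions
print(f"joint reaction states: {n_states:.2e}")  # ~5e47 even for these small numbers

# Compounding of per-step success probabilities:
p_step = 0.99        # assumed chance each social-engineering step avoids suspicion
n_steps = 500        # assumed number of steps in the plan
p_plan = p_step ** n_steps
print(f"chance no step ever backfires: {p_plan:.3f}")   # ~0.7% despite 99% per step
```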
I am not trying to relate it to computability theory, just showing that you get compounding uncertainty. Simulating quantum dynamics at sufficient scale is an obvious example.
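For the quantum example, the standard state-vector count already makes the point (nothing here is specific to any particular plan or method): n two-level systems need 2^n complex amplitudes to represent exactly.

```python
# Memory needed for an exact state vector of n two-level systems,
# at 16 bytes per complex double-precision amplitude.
for n in (30, 50, 100, 300):
    n_bytes = 16 * 2**n
    print(f"{n:>3} qubits/spins -> {n_bytes:.2e} bytes")
# 30 is workstation-scale; 50 is already petabytes; 300 exceeds any storage
# you could build out of the observable universe's ~1e80 atoms.
```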
Another example is the halting problem. It’s provably undecidable, meaning there is no fully general algorithm that will tell you whether a program halts or not. And yet, for many important programs that I encounter in practice, I can tell at a glance whether they will halt or not, and prove it so.
If you know whether a program falls in the set of solvable programs, you’ve already solved the halting problem for it. I assume you still double-check your logic. I also don’t see how you can prove anything for sufficiently large modules.
I don’t see why the halting problem is relevant here, nor do I see that paper proving anything about real-world programs. It’s talking about arbitrary tapes and programs. I don’t see how it directly relates to real-life programming or other problem classes.
And if you want to talk about NP-hardness, it seems that decision makers regularly encounter NP-hard problems. It’s not obvious that they’re avoidable, or why they would be. I don’t see why an ASI would, for example, be able to avoid the hardness of determining optimal resource allocation. The only sidestep I can think of is not needing resources or using a suboptimal solution.
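A small sketch of that last sidestep, on toy numbers of my own: 0/1-knapsack-style allocation is NP-hard in general, and a cheap greedy heuristic gives an answer that is merely good, not optimal.

```python
from itertools import combinations

values  = [60, 100, 120]
weights = [10, 20, 30]
capacity = 50

# Exact answer by brute force: fine for 3 items, hopeless for 300.
best = 0
for r in range(len(values) + 1):
    for combo in combinations(range(len(values)), r):
        if sum(weights[i] for i in combo) <= capacity:
            best = max(best, sum(values[i] for i in combo))

# Greedy by value density: polynomial time, no optimality guarantee.
greedy, remaining = 0, capacity
for i in sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True):
    if weights[i] <= remaining:
        greedy += values[i]
        remaining -= weights[i]

print(f"optimal: {best}, greedy: {greedy}")   # optimal: 220, greedy: 160
```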
This paper seems to explain it better than I can hope to: https://royalsocietypublishing.org/doi/10.1098/rstb.2018.0138
As for what it matters if utility maximization isn’t “ideal”: you get stuck in a local maximum. This seems to undermine the whole atoms line of thought.
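A minimal illustration of what I mean, on a made-up one-dimensional utility function (my own toy example): a maximizer that only accepts strict improvements climbs to the nearer peak and stops, never finding the higher one.

```python
import numpy as np

utility = lambda x: np.sin(3 * x) + x / 3   # two unequal peaks on [0, 3]

# Greedy hill climbing: only move if a neighbouring point is strictly better.
x, step = 0.2, 0.01
while True:
    best = max([x - step, x, x + step], key=utility)
    if utility(best) <= utility(x):
        break
    x = best

print(f"hill climber stops at x={x:.2f}, utility={utility(x):.3f}")

# Compare against a dense grid search to see what was missed:
grid = np.linspace(0, 3, 3001)
print(f"global optimum near x={grid[np.argmax(utility(grid))]:.2f}, "
      f"utility={np.max(utility(grid)):.3f}")
```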