No, I do not think there is disagreement. You tell many people that X is the largest member of the set, and they come up with members that are larger than X. If they give different answers, that is not disagreement among them. If X is particularly ill chosen as the largest member of the set, there can be an enormous number of members larger than X.
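As a minimal sketch of the set analogy (the numbers below are purely illustrative, not anything from the discussion): if X is claimed to be the largest member of a set, several people can each name a different larger member without disagreeing with one another, and a badly chosen X yields many such counterexamples.

```python
# Toy illustration: a claimed "largest member" X and the members larger than it.
members = [3, 7, 12, 19, 25, 40]
x = 3  # X, ill chosen as the largest member

counterexamples = [m for m in members if m > x]
print(counterexamples)  # [7, 12, 19, 25, 40] -- many distinct answers, no mutual disagreement
```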
If you want to claim substantial disagreement, e.g. if you claim that promoters of intelligence amplification see computer security as entirely unhelpful and a net increase in risk, you have to provide examples (surely a sufficiently twisted reasoner can argue that computer security will be an annoying obstacle that will piss off the future cyborg overlord). edit: also, I think claiming that those who disagree are selling their examples as ‘the best that could be done’ is a rather uncharitable interpretation (or actually, a very uncharitable one). For the most part, these are just examples of things that are better to do than FAI.
Holden presumably thinks that many academic AGI approaches are too risky since they are agent designs:
I believe that tools are safer than agents (even agents that make use of the best “Friendliness” theory that can reasonably be hoped for) and that SI encourages a focus on building agents, thus increasing risk.
Nick Szabo thinks working on mind uploading is a waste of time.
I personally promoted intelligence amplification and argued that working on security is of little utility.
Robin Hanson thinks the Singularity will be an important event that we can help make better by improving laws/institutions or advancing certain technologies ahead of others, and presumably would disagree that we should stop worrying about it.
Holden presumably thinks that many academic AGI approaches are too risky since they are agent designs:
He’s an example of biased selection among critics. No detailed critique from him would have been heard if he hadn’t taken it seriously enough in the first place.
Nick Szabo thinks working on mind uploading is a waste of time.
You don’t work on mind uploading today; you work on neurology, which solves a lot of practical problems, including treatments for disorders, and which may or may not eventually lead to uploading. I am rather sceptical that future mind uploading is a significant contributor to the utility of such work.
I personally promoted intelligence amplification and argued that working on security is of little utility.
I do think it is of little utility, because I do not believe in some over-the-internet foom. But if such a foom is granted, then security can stop it (or rather, work on the tools that would allow provably unhackable software can). Ultimately the topic is entirely speculative, and you can only make arguments by adopting some of the assumptions. With regard to ‘provably friendly AGI’, once again the important bit is ‘provably’: that requires techniques and tools that are useful across the board whatever comes in the future (by improving our degree of reliable understanding and control over our creations of any kind), while the ‘friendly’ is something you can’t even work on without knowing how the ‘provably’ is going to be accomplished.
David Dalrymple criticized FAI and is working directly on mind uploading today, so apparently he disagrees with both you and Nick Szabo.
Nick Szabo explicitly suggested working on computer security so he seems to disagree with you about the utility. I disagree with you and him about whether provably unhackable software is feasible.
Do you think I’ve satisfied your request for examples of substantive disagreements? (I’d rather not go into object-level arguments since that’s not what this post is about.)
What I mean is that I think most of the critics would agree that the approaches they see as far-fetched (and which you say they ‘disagree’ about) are still much more realizable than FAI.
Furthermore, the arguments are highly conditional on specific speculations which are taken to be true for the sake of argument. For example, if I am to assume that an unfriendly AI would destroy the world but that this can be prevented with FAI, that assumes an AI of the kind that is actually designed and can be controlled can be built in time. The algorithms relevant to making such an AI cull its search space to a manageable size are also highly relevant to tools for solving all sorts of technological problems, including the biomedical research for mind uploading. This line of argument by no means implies that I believe mind uploading to be likely.
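As a toy illustration of what ‘culling the search space’ means here (my own sketch, not something from the thread; subset-sum stands in for any ordinary engineering search), the same pruning idea that keeps a search tractable is a general-purpose tool:

```python
# Branch-and-bound sketch: prune branches that provably cannot beat the best found so far.
def best_subset_sum(values, limit):
    """Largest sum of a subset of `values` not exceeding `limit`."""
    best = 0

    def search(i, total, remaining):
        nonlocal best
        best = max(best, total)
        if i == len(values):
            return
        if total + remaining <= best:   # cull: this whole subtree cannot improve on `best`
            return
        if total + values[i] <= limit:                     # branch: take item i
            search(i + 1, total + values[i], remaining - values[i])
        search(i + 1, total, remaining - values[i])        # branch: skip item i

    search(0, 0, sum(values))
    return best

print(best_subset_sum([7, 11, 3, 9, 5], limit=20))  # 20
```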
Furthermore, ‘provably friendly’ implies the existence of much superior techniques for designing provably-something software; proving the absence of, e.g., buffer overruns and SQL injections is a much more readily achievable task.
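To make the contrast concrete, here is a small sketch (assuming Python and the standard-library sqlite3 module; the example is mine, not the author’s) of the kind of property meant by ‘absence of SQL injections’: with bound parameters, user input is never interpreted as SQL, which is exactly the sort of claim one could hope to verify mechanically.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Unsafe pattern: user input is spliced into the SQL text itself.
unsafe = f"SELECT role FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())  # [('admin',)] -- the WHERE clause was subverted

# Safe pattern: user input is passed as a bound parameter, never as SQL.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # [] -- no user with that literal name
```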
It would be incredibly difficult to track all the cross-dependencies and rank the future technologies in the order of their appearance (something that may well have lower utility than just picking one and working on it), but you do not need to do that to see that some particularly spectacular solution (which is practically reliant on everything, including neurology for the sake of figuring out and formally specifying what constitutes a human in such a way that a superintelligence wouldn’t come up with some really weird interpretation) is much further down the timeline than the other solutions.
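One way to see the ordering claim without ranking everything precisely is a toy dependency model (the graph below is entirely hypothetical, my own construction for illustration): any consistent ordering has to place a technology that depends on nearly everything else near the end.

```python
# Toy dependency graph: each key requires the technologies in its set.
from graphlib import TopologicalSorter  # Python 3.9+

deps = {
    "computer security tools": set(),
    "neurology": set(),
    "intelligence amplification": {"neurology"},
    "provable-software techniques": {"computer security tools"},
    "formal specification of 'human'": {"neurology"},
    "provably friendly AGI": {
        "provable-software techniques",
        "formal specification of 'human'",
        "intelligence amplification",
    },
}

# Every valid ordering puts the item that relies on everything else last.
print(list(TopologicalSorter(deps).static_order()))
```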
This just bugs me too much.