This comment seems to presuppose that the things AIs want have no value from our perspective. That seems unclear, and this post partially argues against it (or at least against it being true relative to the bulk of human resource use).
I do agree with the claim that negligible value will be produced “in the minds of laboring AIs” relative to value from other sources.
I’m talking about probabilities. Aligned AIs want things that we value in 100% of cases, by definition. Unaligned AIs can want things that we value and things that we don’t value at all. Even if we live in a very rosy universe where unaligned AIs want things that we value in 99% of cases, 99% is strictly less than 100%.
My general objection was to argumentation based on the likelihood of consciousness in AIs as they develop, without accounting for “what conscious AIs actually want to do with their consciousness”, which can be far more important, because the defining feature of intelligence is the ability to turn unlikely states into likely ones.
I think Matthew’s thesis can be summarized as: “from a scope-sensitive utilitarian perspective, AIs which are misaligned and seek power look about as aligned with you (or more aligned), in terms of their galactic resource utilization, as you are aligned with other humans”.
I agree that if the AI were aligned with you, you would strictly prefer that.