Just a note that in the link that Wei Dai provides for “Relevant powerful agents will be highly optimized”, Eliezer explicitly assigns 75% to ‘The probability that an agent that is cognitively powerful enough to be relevant to existential outcomes, will have been subject to strong, general optimization pressures.’
Yeah, it’s worth noting that I don’t understand what this means. By my intuitive read of the statement, I’d have given it a 95+% chance of being true, in the sense that you aren’t going to randomly stumble upon a powerful agent. But also by my intuitive read, the negative example given on that page would be a positive example:
An example of a scenario that negates RelevantPowerfulAgentsHighlyOptimized is KnownAlgorithmNonrecursiveIntelligence, where a cognitively powerful intelligence is produced by pouring lots of computing power into known algorithms, and this intelligence is then somehow prohibited from self-modification and the creation of environmental subagents.
On my view, known algorithms are already very optimized? E.g. Dijkstra’s algorithm is highly optimized for efficient computation of shortest paths.
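(For concreteness, here’s a minimal Python sketch of Dijkstra’s algorithm with a binary heap; the graph format and function name are my own choices, not anything from the linked page. The heap-based bookkeeping is exactly the kind of optimization for computational efficiency that I have in mind when I say a “known algorithm” is already optimized.)

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a graph given as
    {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]  # (distance, node) pairs, ordered by distance
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry; a shorter path to u was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Example: dijkstra({"a": [("b", 1), ("c", 4)], "b": [("c", 1)], "c": []}, "a")
# -> {"a": 0, "b": 1, "c": 2}
```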
So TL;DR: I don’t know what ‘optimized’ is supposed to mean here.