Eliezer seems to be relatively confident that AI systems will be very alien and will understand many things about the world that humans don’t, rather than understanding a similar profile of things (but slightly better), or having weaker understanding but enjoying other advantages like much higher serial speed. I think this is very unclear and Eliezer is wildly overconfident. It seems plausible that AI systems will learn much of how to think by predicting humans even if human language is a uselessly shallow shadow of human thought, because of the extremely short feedback loops. It also seems quite possible that most of their knowledge about science will be built by an explicit process of scientific reasoning and inquiry that proceeds in a way recognizable to human science, even if their minds are quite different. Most importantly, it seems like AI systems have huge structural advantages (like their high speed and low cost) that suggest they will have a transformative impact on the world (and obsolete human contributions to alignment [retracted]) well before they need to develop superhuman understanding of much of the world or tricks about how to think, and so even if they have a very different profile of abilities from humans they may still be subhuman in many important ways.
It seems to me that this claim is approximately equivalent to “takeoff will be soft, not hard”. In a hard-takeoff world, it seems straightforward that AI systems will understand huge, important parts and dynamics of the world in ways that humans don’t grasp even a little, no?
Number 22: