The relevance for LW is that, for a believer in “emergence”, the problem of creating artificial intelligence (though not necessarily a friendly one) is simply a question of having enough computing power to simulate a sufficiently large number of neurons.
I don’t think that, in practice, this has much to do with whether or not someone uses the word “emergence”. As far as I understand, EY thinks that if you simulate enough neurons sufficiently well, you get something that’s conscious.
> As far as I understand, EY thinks that if you simulate enough neurons sufficiently well, you get something that’s conscious.
Without specifying the arrangements of those neurons? Of course it should if you copy the arrangement of neurons out of a real person, say, but that doesn’t sound like what you meant.
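To make the objection concrete, here is a minimal sketch (purely illustrative, not a model anyone in this thread proposed; every name and parameter below is made up): two networks with the same number of simulated neurons but different connectivity produce very different dynamics, so “enough neurons” underspecifies the system until you also specify an arrangement.

```python
import numpy as np

N = 100      # same neuron count in both networks
STEPS = 50   # simulation length
rng = np.random.default_rng(0)

def simulate(W, steps=STEPS):
    """Iterate a toy rate model: x(t+1) = tanh(W @ x(t))."""
    x = rng.standard_normal(N) * 0.1  # small random initial activity
    for _ in range(steps):
        x = np.tanh(W @ x)
    return x

# Two "arrangements" of the same N neurons: dense random wiring vs.
# isolated self-loops. Only the connectivity (weight matrix) differs.
W_random = rng.standard_normal((N, N)) / np.sqrt(N)
W_isolated = 0.5 * np.eye(N)

print(simulate(W_random)[:5])    # sustained, interacting activity
print(simulate(W_isolated)[:5])  # activity just decays toward zero
```

The point is only that neuron count is one parameter among many; the wiring is what actually fixes the computation.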
> As far as I understand, EY thinks that if you simulate enough neurons sufficiently well, you get something that’s conscious.
I would really want a cite on that claim. It doesn’t sound right.
Can you be more specific about what you are skeptical about?